Reasons why NANO fails and will keep failing until some things change
Dear NANO community, this is going to be a long post where I will discuss why NANO underperformed and will keep underperforming in this bull run unless some things change. I'm going to start with straight facts, in the spirit of the famous Floyd Mayweather quote: "Men lie, women lie, numbers don't lie". If you feel offended by some of this, facts don't care about your feelings.

Technical Analysis

In the time where BTC dominance fell from a peak of 74% to 56% and keeps falling, NANO has moved from its low of 0.0000640 sats to a price of 0.0000950 sats. That is about a 50% gain if you bought at the absolute low, but looking at the monthly chart, we can see that NANO has basically been in the range of 0.0000750 to 0.0001400 sats ever since July of 2019 (for more than 2 years). https://charts.cointrader.pro/snapshot/zaXzV The all-time high of NANO was 0.0028, so the current price is down 96% in terms of BTC. https://charts.cointrader.pro/snapshot/tTF4J At this price NANO is falling out of the top 100 cryptocurrencies by market cap.

My thoughts: Considering that the entire altcoin market is moving and keeps reaching new highs, this is very concerning for NANO, and one can only ask why NANO keeps falling behind. Why, on every Bitcoin pump, does its price fall hardest, and on every day when other altcoins go up 30%, why does NANO only go up 10%?

Reasons why NANO is lagging on the market:
Reason 1. - Lack of adoption where NANO can be utilized to its fullest
We all know that NANO has near-instantaneous transactions and is feeless, which is why most of us fell in love with this cryptocurrency. The problem is that it has little to no adoption. What does it matter that NANO is feeless when you don't have an exchange that will do a NANO/USD conversion for 0%? Who cares that STR, XRP and other fast coins have fees of around $0.01 if, either way, the exchange will take 1% or more in fees from you? If XRP has a better exchange, it can easily be more cost-efficient than NANO because of this. The devs need to be much more proactive rather than sit and wait while the entire market eats them alive. Proposed solution: NANO needs to invest more in marketing and in making a deal with an exchange that is liquid enough and charges little to no fees on NANO.
Reason 2. - There is no reward for NANO holders
I have been a NANO holder since 2018 and it's been a long ride, with constant buying at the end of each month and an average buy price of around $2 overall. That is not that bad considering NANO's massive fall and what some other holders have been through. Let's remind ourselves again: NANO has 0% inflation, and yet NANO's price doesn't grow, whereas other cryptocurrencies with 5-10% inflation are massively outperforming it. NANO holders get no rewards for holding NANO, which is a big problem. People call this an advantage, and I somewhat agree, but NANO holders need to be rewarded with something, because the crypto space doesn't care about inflation. Proposed solution: Introduce PoS (Proof of Stake) with 5% inflation, where NANO holders can stake their NANO and receive 5% more NANO each year. Alternatively, make it 6% and halve the inflation every 2 years. Imagine how hyped coins get when their rewards per year are cut in half. NANO has 0% inflation and gets no hype; it's already scarce, but people fail to see it.
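For illustration, the 6%-with-halving variant could be sketched like this (a hypothetical Python sketch: the function name and the year-by-year compounding are my assumptions, not an actual NANO proposal):

```python
# Hypothetical sketch of the proposed staking schedule: 6% annual
# inflation, with the rate halved every 2 years.
def projected_supply(initial_supply: float, years: int,
                     base_rate: float = 0.06, halving_period: int = 2) -> float:
    """Compound the supply year by year under the proposed schedule."""
    supply = initial_supply
    for year in range(years):
        rate = base_rate / (2 ** (year // halving_period))
        supply *= 1 + rate
    return supply
```

Under this schedule the total dilution is bounded: over 10 years the supply grows only about 26%, and the yearly rate quickly becomes negligible, so the coin stays scarce while still paying holders.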
Reason 3. - NANO is refusing to adapt with the current market
The current bull run was ignited by DeFi, because people see that they can earn up to 3-5% daily just for holding ERC-20 tokens like BAT, BAL, LINK etc. There are even WBTC (Wrapped Bitcoin) and WETH (Wrapped Ethereum), which means people can hold a cryptocurrency they would hold anyway even without rewards, get that 3-5% daily income, plus the chance of the DeFi coin actually pumping 1000+%, which many of them have done in the past month. Because of all of this, people are massively buying ERC-20 tokens just to get these daily gains. What has NANO done to interact with this entire DeFi space? Absolutely nothing. Did they try to introduce wNANO (wrapped NANO) like Ethereum and Bitcoin did? No. They just kept working on some other bullshit, even though the protocol is in and of itself 99% perfect and working. They keep focusing their energy on technology when the technology is already better than anything else on the crypto market. NANO is currently the best fast cryptocurrency, and it is not even close. Proposed solution: The devs need to start focusing their energy on things that matter and that will help the price, instead of dumping their stash and blindly watching everything else keep growing.
Reason 4. - No one is making money off the NANO market
This is similar to reason number 2, but it has to be said separately. Just ask yourself: who benefits from BTC markets? Miners. Who benefits from any other PoS market? All of the holders. With this money you can then finance devs who will work on the currency and thereby raise the price, and the whole cycle repeats itself. What all of these things have in common is that people make money by doing something for the ecosystem: on one hand contributors get paid, on the other so do the people who are loyal to the project. NANO has one of the best and largest communities in cryptocurrency, and the numbers confirm this, yet there is no particular way for any of us to benefit from it. Everything is open source and people make everything for free. Proposed solution: Introduce a mechanism so that community members can earn money from holding NANO.

Conclusion: NANO is an amazing currency, but many things need to fall into place for it to stop falling behind the market. It's sad that investing in Ethereum, the so-called "safest" altcoin, would have made you much better gains than buying NANO even at its all-time low. This post is meant as constructive criticism and, in the end, to open people's minds to the problems NANO currently has in the space. Please share this post so more people, and hopefully the devs, can see it, and so that we as a community can start working towards our goal of NANO becoming one of the most utilized cryptocurrencies in the world.
Taproot! Everybody wants to have it, somebody wants to make it, nobody knows how to get it! (If you are asking why everybody wants it, see: Technical: Taproot: Why Activate?) (Pedants: I mostly elide over lockin times) Briefly, Taproot is that neat new thing that gets us:
Multisignatures (n-of-n, k-of-n) that are just 1 signature (1-of-1) in length!! (MuSig/Schnorr)
Better privacy!! If all contract participants can agree, just use a multisignature. If there is a dispute, show the contract publicly and have the Bitcoin network resolve it (Taproot/MAST).
Activation lets devs get back to work on even newer stuff, like:
Cross-input signature aggregation!! (transaction with multiple inputs can have a single signature for all inputs) --- needs Schnorr, but some more work needed to ensure that the interactions with SCRIPT are okay.
Block validation - Schnorr signatures for all taproot spends in a block can be validated in a single operation instead of for each transaction!! Speed up validation and maybe we can actually afford to increase block sizes (maybe)!!
SIGHASH_ANYPREVOUT - you know, for Decker-Russell-Osuntokun ("eltoo") magic!!!
OP_CHECKTEMPLATEVERIFY - vaulty vaults without requiring storing signatures, just transaction details!!
So yes, let's activate taproot!
The SegWit Wars
The biggest problem with activating Taproot is PTSD from the previous softfork, SegWit. Pieter Wuille, one of the authors of the current Taproot proposal, has consistently held the position that he will not discuss activation, and will accept whatever activation process is imposed on Taproot. Other developers have expressed similar opinions. So what happened with SegWit activation that was so traumatic? SegWit used the BIP9 activation method. Let's dive into BIP9!
bit - A field in the block header, the nVersion, has a number of bits. By setting a particular bit, the miner making the block indicates that it has upgraded its software to support a particular soft fork. The bit parameter for a BIP9 activation is which bit in this nVersion is used to indicate that the miner has upgraded software for a particular soft fork.
timeout - a time limit, expressed as an end date. If this timeout is reached without sufficient number of miners signaling that they upgraded, then the activation fails and Bitcoin Core goes back to the drawing board.
Now there are other parameters (name, starttime), but they are not anywhere near as important as the above two. A number that is not a parameter is 95%. Basically, activation of a BIP9 softfork is considered to have actually succeeded if at least 95% of the blocks in the last 2-week retarget period (2016 blocks) had the specified bit in nVersion set. If less than 95% had this bit set before the timeout, then the upgrade fails and never goes into the network. This is not a parameter: it is a constant defined by BIP9, and developers using BIP9 activation cannot change it. So, first some simple questions and their answers:
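In code, the lock-in rule is simple; here is a rough Python sketch (illustrative only, not Bitcoin Core's actual implementation; BIP9 specifies the mainnet threshold as 1916 of 2016 blocks, i.e. 95%):

```python
# Illustrative BIP9 lock-in check: a softfork locks in only if at least
# 1916 of the 2016 blocks in one retarget period signal the assigned bit.
WINDOW = 2016       # one difficulty retarget period (~2 weeks)
THRESHOLD = 1916    # 95% of 2016, per BIP9 for mainnet

def bit_is_set(n_version: int, bit: int) -> bool:
    """Check whether a block's nVersion signals the given BIP9 bit."""
    return (n_version >> bit) & 1 == 1

def locked_in(window_versions: list[int], bit: int) -> bool:
    """True if enough blocks in one retarget window signalled the bit."""
    assert len(window_versions) == WINDOW
    signalling = sum(bit_is_set(v, bit) for v in window_versions)
    return signalling >= THRESHOLD
```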
Why not just set a day when everyone starts imposing the new rules of the softfork?
This was done classically (in the days when Satoshi was still among us). But this might be argued to put too much power in the hands of developers, since there would be no way to reject an upgrade without possibly bad consequences. For example, developers might package an upgrade that users do not want together with vital security bugfixes. Either you live without the vital security bugfixes and hire some other developers to fix them for you (which can be difficult; presumably the best developers are already the ones working on the codebase), or you get the vital security bugfixes and implicitly support the upgrade you might not want.
Sure, you could fork the code yourself (the ultimate threat in the FOSS world) and hire another set of developers who aren't assholes to do the dreary maintenance work of fixing security bugs, but Bitcoin needs strong bug-for-bug compatibility so everyone should really congregate around a single codebase.
Basically: even the devs do not want this power, because they fear being coerced into putting "upgrades" that are detrimental to users. Satoshi got a pass because nobody knew who he was and how to coerce him.
Why require a 95% threshold, and not something lower like a simple majority?

Suppose the threshold were lower, like 51%. If so, after activation, somebody could disrupt the Bitcoin network by creating a transaction that is valid under the pre-softfork rules but invalid under the post-softfork rules. Upgraded nodes would reject it, but 49% of miners would accept it and include it in a block (which makes the block invalid). And then the same 49% would accept the invalid block and build on top of it, possibly creating a short chain of doomed invalid blocks that confirm an invalid spend. This can confuse SPV wallets, which might see multiple confirmations of a transaction and accept the funds, but later find that it is in fact invalid under the now-activated softfork rules.
Thus, a very high threshold was imposed. 95% is considered safe. 50% is definitely not safe. Due to variance in the mining process, 80% could also be potentially unsafe (i.e. 80% of blocks signaling might have a good chance of coming from well under 80% of miners), so a threshold of 95% was considered safe enough for Bitcoin.
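The variance argument is easy to check with a quick simulation (an illustrative Python sketch under the simplifying assumption that each block is won independently with probability equal to the signalling miners' hashpower share):

```python
import random

# Each block is an independent "lottery win", so the observed fraction of
# signalling blocks in a 2016-block window fluctuates around the true
# hashpower share of the signalling miners.
def observed_range(true_share: float, window: int = 2016,
                   trials: int = 2000, seed: int = 42) -> tuple[float, float]:
    """Return the (min, max) signalling fraction seen across many windows."""
    rng = random.Random(seed)
    fractions = []
    for _ in range(trials):
        hits = sum(rng.random() < true_share for _ in range(window))
        fractions.append(hits / window)
    return min(fractions), max(fractions)

lo, hi = observed_range(0.90)
# Even with a true 90% share, single windows routinely land a couple of
# percentage points above or below 90%, so a threshold needs headroom.
```

In other words, the observed signalling percentage is only a noisy estimate of real miner support, which is one reason to demand a margin well above a bare majority.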
Why have a timeout that disables the upgrade?
Before BIP9, what was used was either a flag day or BIP34. BIP34 had no flag day of activation and no bit; instead, it was just a 95% threshold for signalling an nVersion value greater than a specific value. Actually, it was two thresholds: at 75%, blocks with the new nVersion would have the new softfork rules imposed, but at 95%, blocks with the old nVersion would be rejected (and only the new blocks, following the new softfork rules, were accepted). For one thing, between 75% and 95% there was a situation where the softfork was only "partially imposed": only blocks signaling the new rules would actually have those rules enforced, while blocks following the old rules were still valid. This was fine for BIP34, which only added rules for miners, with negligible effect on non-miners. But the same mechanism was later reused for BIP66, whose rules did matter beyond miners.
The reason miners signalled support for BIP66 was that they felt pressured to signal. So they signalled support, with plans to actually upgrade later, but because of the widespread signalling, the new BIP66 version locked in before their upgrade plans were finished. Thus, the timeout that disables the upgrade was added in BIP9 to give miners an escape hatch.
The Great Battles of the SegWit Wars
SegWit not only fixed transaction malleability, it also created a practical softforkable blocksize increase and rebalanced weights so that the cost of spending a UTXO is about the same as the cost of creating one (and spending UTXOs is "better", since it limits the size of the UTXO set that every fullnode has to maintain). So SegWit was written, the activation was decided to be BIP9, and then... miner signalling stalled at below 75%. Thus were the Great SegWit Wars started.
BIP9 Feature Hostage
If you are a miner with at least 5% of global hashpower, you can hold a BIP9-activated softfork hostage. You might even secretly want the softfork to actually push through. But you might want to extract concessions from the users and the developers. Like removing the halvening. Or raising or even removing the block size caps (which helps larger miners more than smaller miners, making it easier to become a bigger fish that eats all the smaller fish). Or whatever. With BIP9, you can hold the softfork hostage: you just hold out and refuse to signal, and tell everyone you will signal if and only if certain concessions are given to you. This ability of miners to hold a feature hostage was enabled by the miner exit allowed by the BIP9 timeout. Prior to that, miners were considered little more than expendable security guards, paid for the risk they take in securing the network, but not special in the grand scheme of Bitcoin.
ASICBoost

ASICBoost was a novel way of optimizing SHA256 mining by taking advantage of the structure of the 80-byte header that is hashed to perform proof-of-work. The details of ASICBoost are out of scope here, but you can read about it elsewhere. Here is a short summary of the two types of ASICBoost relevant to the activation discussion.
Overt ASICBoost - Manipulates the unused bits in nVersion to reduce power consumption in mining.
Covert ASICBoost - Manipulates the order of transactions in the block to reduce power consumption in mining.
Now, "overt" means "obvious", while "covert" means "hidden". Overt ASICBoost is obvious because nVersion bits that are not currently in use for BIP9 activations are usually 0 by default, so setting those bits to 1 makes it obvious that you are doing something weird (namely, Overt ASICBoost). Covert ASICBoost is non-obvious because the order of transactions in a block is up to the miner anyway, so a miner rearranging transactions to get lower power consumption is not going to be detected.

Unfortunately, while Overt ASICBoost was compatible with SegWit, Covert ASICBoost was not. Pre-SegWit, only the block header Merkle tree committed to the transaction ordering. With SegWit, another Merkle tree exists which commits to transaction ordering as well, so Covert ASICBoost would require manipulating (and recomputing) two Merkle trees, which is more expensive and obviates its power benefits.

Now, miners want to use ASICBoost (indeed, about 60-70% of current miners probably use Overt ASICBoost nowadays; if you have a Bitcoin fullnode running, you will see logs like "60 of last 100 blocks had unexpected versions", which is exactly what you would see with the nVersion manipulation that Overt ASICBoost does). But remember: ASICBoost was, at the time, a novel improvement. Not all miners had ASICBoost hardware. Those who did, did not want it known that they had ASICBoost hardware, and wanted to do Covert ASICBoost, which, as explained above, is incompatible with SegWit.
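The "unexpected versions" log message comes from this kind of check; here is an illustrative Python sketch (not Bitcoin Core's actual code) of how Overt ASICBoost blocks stand out, assuming BIP9's convention that the top three nVersion bits are 001:

```python
# BIP9-style nVersion: top 3 bits are 001 (0x20000000); bits 0-28 are
# available for deployment signalling. Overt ASICBoost grinds the unused
# bits, so such blocks show bits set that no known deployment uses.
TOP_BITS_MASK = 0xE0000000
VERSIONBITS_TOP = 0x20000000

def unexpected_bits(n_version: int, deployment_bits: set[int]) -> set[int]:
    """Return version bits set in a header that no known deployment uses."""
    if n_version & TOP_BITS_MASK != VERSIONBITS_TOP:
        return set()  # not a BIP9-style version field at all
    return {bit for bit in range(29)
            if (n_version >> bit) & 1 and bit not in deployment_bits}
```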
Of course, the miners that wanted Covert ASICBoost did not want to openly admit that they had ASICBoost hardware; they wanted to keep their advantage secret, because miners are strongly competitive in a very tight market. Doing ASICBoost covertly was just the ticket, but it could not work post-SegWit. Fortunately for them, due to the BIP9 activation process, they could hold SegWit hostage while quietly retaining the advantage of Covert ASICBoost!
UASF: BIP148 and BIP8
When the incompatibility between Covert ASICBoost and SegWit was realized, activation of SegWit still stalled, and miners were still not openly admitting that ASICBoost was related to the non-activation of SegWit. Eventually, a new proposal was created: BIP148. Under this rule, 3 months before the end of the SegWit timeout, nodes would reject blocks that did not signal SegWit. Thus, 3 months before the SegWit timeout, BIP148 would force activation of SegWit. This proposal was not accepted by Bitcoin Core, due to the shortening of the timeout (it effectively times out 3 months before the initial SegWit timeout). Instead, a fork of Bitcoin Core was created which added the patch to comply with BIP148. This was called a User Activated Soft Fork, UASF, since users could freely download the alternate fork rather than sticking with the developers of Bitcoin Core.

Now, BIP148 is effectively just a BIP9 activation, except that at its (earlier) timeout, the new rules would be activated anyway (instead of the BIP9-mandated behavior that the upgrade is cancelled at the end of the timeout). BIP148 was actually inspired by the BIP8 proposal (the link here is a historical version; BIP8 has been updated recently, precisely in preparation for Taproot activation). BIP8 is basically BIP9, but at the end of the timeout the softfork is activated anyway rather than cancelled. This removes the ability of miners to hold the softfork hostage: at best, they can delay the activation, but not stop it entirely by holding out as in BIP9. Of course, this implies a risk that not all miners have upgraded before activation, leading to possible losses for SPV users, as well as once again pressuring miners into signalling activation, possibly without actually upgrading their software to properly impose the new softfork rules.
BIP91, SegWit2X, and The Aftermath
BIP148 inspired countermeasures, possibly from the Covert ASICBoost miners, possibly from concerned users who wanted to offer concessions to miners. To this day, the common name for BIP148 - UASF - remains an emotionally charged rallying cry for parts of the Bitcoin community. One of these countermeasures was SegWit2X. It was brokered in a deal between some Bitcoin personalities at a conference in New York, and is thus part of the so-called "New York Agreement" or NYA, another emotionally charged acronym. The text of the NYA was basically:
Set up a new activation threshold at 80% signalled at bit 4 (vs bit 1 for SegWit).
When this 80% signalling was reached, miners would require that bit 1 for SegWit be signalled, to achieve the 95% activation needed for SegWit.
If the bit 4 signalling reached 80%, increase the block weight limit from the SegWit 4000000 to the SegWit2X 8000000, 6 months after bit 1 activation.
The first item above was coded in BIP91. Unfortunately, if you read BIP91 independently of the NYA, you might come to the conclusion that BIP91 was only about lowering the threshold to 80%. In particular, BIP91 never mentions anything about the third point above; it never mentions that the bit 4 80% threshold would also signal for a later hardfork increase in the weight limit. Because of this, even though there are claims that NYA (SegWit2X) reached 80% dominance, a close reading of BIP91 shows that the 80% dominance was only for SegWit activation, without necessarily a later 2x capacity hardfork (SegWit2X). This ambiguity of bit 4 (the NYA says it includes a 2x capacity hardfork, BIP91 says it does not) has continued to be a thorn in blocksize debates ever since.

Economically speaking, Bitcoin futures between SegWit and SegWit2X showed strong dominance in favor of SegWit (SegWit2X futures traded at a fraction of the value of SegWit futures; I personally made a tidy but small amount of money betting against SegWit2X in the futures market), so suggesting that the NYA achieved 80% dominance even in mining is laughable, but the NYA text that ties bit 4 to SegWit2X still exists.

Historically, BIP91 triggered, which caused SegWit to activate before the shorter BIP148 timeout. BIP148 proponents hold to this day that it was the BIP148 shorter timeout and no-compromises-activate-on-August-1 stance that made miners flock to BIP91 as a face-saving tactic, which in effect removed the second clause of the NYA. NYA supporters keep pointing to the bit 4 text in the NYA and the historical activation of BIP91 as a failed promise by Bitcoin developers.
We have discussed BIP8: roughly, it has a bit and a timeout; if 95% of miners signal the bit it activates, and at the end of the timeout it activates anyway. (EDIT: BIP8 has had recent updates: at the end of the timeout it can now either activate or fail. For the most part, in the text below "BIP8" means BIP8-and-activate-at-timeout, and "BIP9" means BIP8-and-fail-at-timeout.) So let's take a look at Modern Softfork Activation!
Modern Softfork Activation
This is a more complex activation method, composed of BIP9 and BIP8 as subcomponents.
First have a 12-month BIP9 (fail at timeout).
If the above fails to activate, have a 6-month discussion period during which users and developers and miners discuss whether to continue to step 3.
Have a 24-month BIP8 (activate at timeout).
The total above is 42 months, if you are counting: 3.5 years worst-case to activation. The logic here is that if there are no problems, BIP9 will work just fine anyway; and if there are problems, the 6-month period should weed them out. Finally, miners cannot hold the feature hostage, since the 24-month BIP8 period will happen anyway.
PSA: Being Resilient to Upgrades
Software is very brittle. Anyone who has used software for a long time has experienced something like this:
You hear a new version of your favorite software has a nice new feature.
Excited, you install the new version.
You find that the new version has subtle incompatibilities with your current workflow.
You are sad and downgrade to the older version.
You find out that the new version has changed your files in incompatible ways that the old version cannot work with anymore.
You tearfully reinstall the newer version and figure out how to regain your lost productivity now that you have to adapt to a new workflow.
If you are a technically competent user, you might codify your workflow into a bunch of programs. Then you upgrade one of the external pieces of software you are using and find that it has a subtle incompatibility with your current workflow, which is based on a bunch of simple programs you wrote yourself. And if those simple programs are the basis of some important production system, you have just screwed up, because you upgraded software on an important production system.

Well, one of the issues with new softfork activation is that if not enough people (users and miners) upgrade to the newest Bitcoin software, the security of the new softfork rules is at risk. Upgrading software of any kind is always a risk, and the more software you build on top of the software being upgraded, the greater the risk of your tower of software collapsing while you change its foundations. So if you have some complex Bitcoin-manipulating system with Bitcoin somewhere at the foundation, consider running two Bitcoin nodes:
One is a "stable-version" Bitcoin node. Once it has synced, set it up to connect=x.x.x.x to the second node below (so that your ISP bandwidth is only spent on the second node). Use this node to run all your software: it's a stable version that you don't change for long periods of time. Enable txindex, disable pruning, whatever your software needs.
The other is an "always-up-to-date" Bitcoin node. Keep its storage down with pruning (initially sync it off the "stable-version" node). You can't use blocksonly if your "stable-version" node needs to send transactions, but otherwise this "always-up-to-date" node can be kept as a low-resource node, so you can run both nodes on the same machine.
When a new Bitcoin version comes out, you just upgrade the "always-up-to-date" Bitcoin node. This protects you if a future softfork activates: you will only receive valid Bitcoin blocks and transactions. Since this node has nothing running on top of it, it is just a special peer of the "stable-version" node, so any software incompatibilities with your system software do not exist. Your "stable-version" Bitcoin node remains the same version until you are ready to actually upgrade it and are prepared to rewrite most of the software you have running on top of it due to version compatibility problems.

When upgrading the "always-up-to-date" node, you can bring it down safely and start it again later; your "stable-version" node will keep running, disconnected from the network, but otherwise still available for whatever queries. You do need some system to stop the "always-up-to-date" node if for any reason the "stable-version" goes down (otherwise, if the "always-up-to-date" node advances its pruning window past what your "stable-version" has, the "stable-version" cannot sync afterwards), but if you are technically competent enough to need this setup, you are technically competent enough to write such a trivial monitor program (EDIT: gmax notes you can adjust the pruning window by RPC commands to help with this as well). This recommendation is from gmaxwell on IRC, by the way.
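As a concrete illustration, the two-node setup above might look like this in each node's bitcoin.conf (the address and ports are placeholders; the options themselves (connect, txindex, prune, port, blocksonly) are standard bitcoind settings):

```
# bitcoin.conf for the "stable-version" node: full history, talks only
# to the local always-up-to-date node.
connect=127.0.0.1:8444   # example address/port of the always-up-to-date node
txindex=1                # keep a full transaction index for your software
prune=0                  # never prune; keep all blocks

# bitcoin.conf for the "always-up-to-date" node: a low-resource peer
# that follows the network and filters out invalid blocks/transactions.
port=8444                # example non-default port so both fit on one machine
prune=550                # keep only the most recent ~550 MB of blocks
#blocksonly=1            # only if the stable node never relays transactions
```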
In this piece, we will focus on the view that Bitcoin is an aspirational store of value. We explore the inherent characteristics that position Bitcoin to fulfill this role in the future, consider whether it is being used in this way today, and discuss factors that may drive greater demand for such utility.
Bitcoin’s digital scarcity
A robust store of value asset retains purchasing power over long periods of time. An emerging store of value grows purchasing power until it stabilizes. The key characteristics cited in reference to good stores of value are scarcity, portability, durability and divisibility. The most important of these attributes is arguably scarcity, which is essential for protecting against the depreciation of real value in the long run. Scarcity means there is a limited quantity of the asset in question, more cannot be easily created, and it is impossible to counterfeit. One of bitcoin’s most novel innovations is its unforgeable digital scarcity. Investors believe this property is foundational to understanding and appreciating bitcoin. The bitcoin supply is perfectly inelastic and is not susceptible to supply shocks: supply does not respond to increases in production capacity (i.e. greater hash power) even when heightened demand drives prices higher. Even gold, which has been used as a store of value for millennia, is not immune to supply shocks. While its ability to increase production in response to a rise in demand is limited, gold is not perfectly inelastic.
Decentralized checks and balances
Bitcoin’s monetary policy was established when it was created. Its credibility is enforced in part by decentralization and proof-of-work mining. Bitcoin has a leaderless network of decentralized full nodes (computers running bitcoin software), in which every node stores the ledger of transactions and performs transaction verification independently, checking that rules are being followed. Because of this redundancy, there is no central point of failure. Full nodes that verify transactions are distinct from miners who expend energy to process transactions and mint bitcoin. Unlike mining, transaction verification does not require significant resources in the form of hardware or electricity. Thus, any computer can join the distributed network to store and verify bitcoin transactions. Today tens of thousands of nodes perform this function. In addition to preventing transactions that don’t follow consensus rules, the level of decentralization that exists in the bitcoin network protects core properties such as the 21 million fixed supply by making it virtually impossible to change. No central party has sole discretion over bitcoin’s monetary policy. Rather, such a change would require significant social coordination among stakeholders (e.g. users, miners and those running full nodes). Most stakeholders believe bitcoin has value because of its digital scarcity, resulting in negligible support for such a change.
Investors believe that the next wave of awareness and adoption could be driven by external factors such as unprecedented levels of intervention by central banks and governments, record low interest rates, increasing fiat money supply, deglobalization and the potential for ensuing inflation, all of which have been accelerated by the pandemic and economic shutdown. Longer-term tailwinds that could fuel adoption include the use of bitcoin to preserve wealth amidst “slow and steady” inflation and the looming generational wealth transfer to millennials, who view bitcoin more favorably than other demographics.
Current interest in bitcoin’s store of value properties
Tudor Investment Corporation’s decision to allocate to bitcoin in the Tudor BVI fund is evidence that unprecedented monetary growth is driving institutional interest in bitcoin’s store of value properties. Paul Tudor Jones, founder and Chief Investment Officer, and Lorenzo Giorgianni, Head of Global Research, articulated the rationale for investing in bitcoin in their May 2020 investor letter, “The Great Monetary Inflation.” The Tudor Investments team scored financial assets, fiat cash, gold and bitcoin on four characteristics that define store of value assets – purchasing power, trustworthiness, liquidity and portability. Bitcoin’s score was 60% of the score of financial assets but 1/1200th of their market cap, and 66% of the score of gold but 1/60th of its market cap, leading them to conclude, “Something appears to be wrong here and my guess is that it’s the price of Bitcoin.” While many have expressed the same reasoning, this was seen as a watershed moment, given that the thesis and investment came from a legendary macro investor and traditional hedge fund manager (Paul Tudor Jones) and a former Deputy Director of the Strategy, Policy and Review Department at the IMF (Lorenzo Giorgianni).
Bitcoin’s inherent properties have given rise to the perspective that bitcoin has the potential to be a store of value, with complementary and interdependent components – the decentralized settlement network (Bitcoin) and its digitally scarce native asset (bitcoin). Equally important is the consideration of demand for bitcoin’s unique features – there is no long-term value to create or store if there is no sustained demand for these properties. External forces accelerating interest and investment in bitcoin include unprecedented levels and exotic forms of monetary and fiscal stimulus globally, with unknown consequences. This is exacerbating the concerns that Bitcoin was designed to address and is leading more investors and users towards bitcoin as an “insurance policy” that may provide protection against those unknown consequences. Simultaneously, the massive transfer of wealth from the older generation to a younger demographic is a more gradual but important long-term tailwind, as younger people view bitcoin more favorably; this is an important catalyst for bitcoin adoption as they inherit and grow their wealth. While bitcoin is not guaranteed to succeed as a store of value if sustainable long-term demand for the use case fails to materialize, the tailwinds mentioned above should drive incremental demand for a novel asset with unique properties. Additionally, as we will examine in future parts of our bitcoin investment thesis series, Bitcoin’s strength is that its properties allow it to serve multiple functions, further improving the likelihood of its success as measured by growth in value.
It's no secret anymore that people are trying to mine private keys. Even if the chances of finding the right key are astronomically low, there is a chance. With a graphics card mining rig, a miner, with an investment of a few hundred dollars, can produce more than 300 MH/s. Now imagine someone dedicating even more resources to finding a private key. As I said, the chances of success are low; that's the beauty of mathematics. But there is a chance, and right now people are trying. There should be a way to prevent such behavior. I was thinking of a solution to this problem: a wallet should have a "wallet token/coin". When a user wants to make a transaction, say with Bitcoin, he would first need to make a transaction using the "wallet token". The "wallet token" has a private key of its own: a hash generated from a username, password, PIN, and timestamp. The transaction would be automatically directed to the connected node unless specified otherwise, and it would produce a tx id. Just as now, when the user wants to make the Bitcoin transaction, he would insert his private key; but in this case, besides the private key, the wallet would ask for the tx id of the "wallet token" transaction. Those two hashes would produce a unique, longer, one-time-use private key, which would enable the wanted transaction. A private key miner would need to make countless transactions before even being able to find out whether he got the right private key. Economically, that would not be profitable, unlike now, when he can effortlessly guess and test whether a private key "fits" until it succeeds. The "wallet token" would be created with one of these mechanisms:
Proof of work - mining like BTC
Proof of ownership - every wallet would produce small amounts of tokens over time.
Proof of transaction - Every transaction you do, you generate a new token for future transactions.
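The two-hash derivation described above can be sketched roughly as follows. This is a minimal illustration only, assuming SHA-256/SHA-512 as the hash functions and hex-encoded keys and tx ids; all function names are hypothetical and not part of any existing wallet:

```python
import hashlib

def wallet_token_key(username: str, password: str, pin: str, timestamp: str) -> str:
    # The "wallet token" private key: a hash of username, password, PIN and timestamp.
    material = f"{username}:{password}:{pin}:{timestamp}".encode()
    return hashlib.sha256(material).hexdigest()

def one_time_key(private_key_hex: str, wallet_token_txid_hex: str) -> str:
    # Combine the real private key with the wallet-token transaction id
    # to produce a longer, single-use key that authorizes one transaction.
    combined = bytes.fromhex(private_key_hex) + bytes.fromhex(wallet_token_txid_hex)
    return hashlib.sha512(combined).hexdigest()
```

The intended effect is that a key guesser needs a fresh wallet-token tx id, which costs a transaction per attempt, before any guess can even be tested.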
This is not a light and user-friendly solution; its sole purpose is enhanced security. PS: I'm not a techy guy. I don't know whether this would require a completely new blockchain or could be implemented in existing wallets, coins, and protocols. Even if enormous numbers are reliable enough to keep our cryptocurrencies safe today, faster and more efficient computers are being built every day. At this rate of progress, it is not hard to imagine a super ASIC that could mine a private key if left a few years to do its job, not to mention the threat that quantum computers represent. I hope this opens a discussion in the crypto community to find the best solution to this problem, or at least that someone can explain why this is not an option or is a bad idea. Thank you Satoshi!
How EpiK Protocol “Saved the Miners” from Filecoin with the E2P Storage Model?
On October 20, Eric Yao, Head of EpiK China, and Leo, Co-Founder & CTO of EpiK, visited the Deep Chain Online Salon and discussed “How EpiK saved the miners eliminated by Filecoin by launching the E2P storage model”. The following is a transcript of the sharing. Sharing Session Eric: Hello, everyone, I’m Eric. I graduated from the School of Information Science, Tsinghua University. My Master’s research was on data storage and big data computing, and I published a number of papers at top industry conferences. Since 2013, I have invested in Bitcoin, Ethereum, Ripple, Dogecoin, EOS and other well-known blockchain projects, settling into the chain circle as an early technology-based investor and industry observer with 2 years of blockchain experience. I am also a blockchain community initiator and technology evangelist. Leo: Hi, I’m Leo, CTO of EpiK. Before co-founding EpiK, I spent 3 to 4 years working on blockchain: public chains, wallets, browsers, decentralized exchanges, task distribution platforms, smart contracts, and so on, and I’ve made some great products. EpiK is an answer to the question we’ve been asking for years about how blockchain should land, and we hope that EpiK is fortunate enough to be an answer for you as well. Q & A Deep Chain Finance: First of all, let me ask Eric: on October 15, Filecoin’s mainnet launched, which attracted everyone’s attention, but at the same time the calls for a fork within Filecoin never stopped, and the EpiK Protocol is one of them. What I want to know is: what kind of project is EpiK Protocol? For what reason did you choose to fork in the first place? What are the differences between the forked project and Filecoin itself? Eric: First of all, let me answer the first question: what kind of project is EpiK Protocol.
With the Fourth Industrial Revolution already upon us, comprehensive intelligence is one of the core goals of this stage, and the key to comprehensive intelligence is making machines understand what humans know and learn new knowledge based on what they already know. A large-scale knowledge graph is a key step towards full intelligence. The EpiK Protocol was born to solve the many challenges of building large-scale knowledge graphs. EpiK Protocol is a decentralized, hyper-scale knowledge graph that organizes and incentivizes knowledge through decentralized storage technology, decentralized autonomous organizations, and a generalized economic model. Members of the global community will expand the horizons of artificial intelligence into a smarter future by organizing all areas of human knowledge into a knowledge map that will be shared and continuously updated as the eternal knowledge vault of humanity. And then, for what reason was the fork chosen in the first place? EpiK’s founders are all senior blockchain industry practitioners who have been closely following industry development and application scenarios, among which decentralized storage is a very fresh one. However, during the development of Filecoin, the team found that due to some design mechanisms and historical reasons, Filecoin deviated from the original intention of the project: for example, the overly harsh penalty mechanism, which threatens to weaken security, and the computing power competition, which led to a computing power monopoly by large miners, who can monopolize packaging rights and inflate their computing power by uploading useless data themselves.
These problems will cause the data environment on Filecoin to get worse and worse, leading to a lack of real value in on-chain data, high data redundancy, and difficulty commercializing the project. Having noted the above problems, the project proposes to introduce multiple roles and a decentralized collaboration platform (a DAO) to ensure the high value of on-chain data through a reasonable economic model and incentive mechanism, and to store high-value data – the knowledge graph – on the blockchain through decentralized storage, so that the lack of on-chain data value and the monopoly of large miners’ computing power can be solved to a large extent. Finally, what differences exist between the forked project and Filecoin itself? Given the issues above, EpiK’s design is very different from Filecoin’s. First of all, EpiK is more focused in terms of business model: it faces a different market and track from the cloud storage market where Filecoin sits, because decentralized storage has no advantage over professional centralized cloud storage in terms of storage cost and user experience. EpiK focuses on building a decentralized knowledge graph, which reduces data redundancy and safeguards the value of data on the distributed storage chain while preventing the knowledge graph from being tampered with by a few people, thus making commercialization of the entire project reasonable and feasible. From the perspective of ecosystem construction, EpiK treats miners more kindly and largely solves Filecoin’s pain points. Firstly, it changes Filecoin’s storage collateral and commitment collateral to a one-time collateral: miners participating in EpiK Protocol are only required to pledge 1,000 EPK per miner, once, before mining – not for each sector.
What is the significance of 1,000 EPK? You only need to participate in pre-mining for about 50 days to earn the tokens used for the pledge. The EPK pre-mining campaign is currently underway and runs from early September to December, with a daily release of 50,000 ERC-20 standard EPK. Pre-mining nodes whose applications are approved divide these tokens according to their share of mining that day, and the tokens can be exchanged 1:1 for mainnet tokens after launch. This move will continue to expand the number of miners eligible to participate in EPK mining. Secondly, EpiK has a more lenient penalty mechanism than Filecoin’s official consensus, storage and contract penalties, because data can only be uploaded by field experts – the “Expert to Person” (E2P) mode. Every miner’s data is backed up, which means that if one or more miners go offline, the network is not much affected; a miner who fails to submit the proof of spacetime in time because of being offline only forfeits the effective computing power of that sector, not the pledged coins. If the miner re-submits the proof of spacetime within 28 days, he regains the power. Unlike Filecoin’s 32 GB sectors, EpiK’s sectors are smaller, only 8 MB each, which largely solves Filecoin’s sector space wastage problem, and all miners have the opportunity to complete encapsulation quickly, which is very friendly to miners with little computing power. The data and quality constraints will also ensure that the effective computing power gap between large and small miners does not grow too wide.
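The pre-mining figures quoted above can be sanity-checked with a bit of arithmetic, using only the numbers from the interview:

```python
DAILY_RELEASE_EPK = 50_000   # ERC-20 EPK released per day during pre-mining
PLEDGE_EPK = 1_000           # one-time collateral required per miner
DAYS_TO_PLEDGE = 50          # "about 50 days", per the interview

# A miner who earns the pledge in ~50 days averages 20 EPK per day,
# i.e. about 0.04% of the daily release.
implied_daily_share = PLEDGE_EPK / DAYS_TO_PLEDGE
implied_ratio = implied_daily_share / DAILY_RELEASE_EPK
print(implied_daily_share, implied_ratio)  # 20.0 0.0004
```

In other words, the "about 50 days" claim assumes a node captures roughly 1/2500th of each day's release.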
Finally, unlike Filecoin’s P2P data uploading model, EpiK changes data uploading and maintenance to E2P uploading: field experts upload and ensure the quality and value of the data on the chain, while a rational economic model introduces a game relationship between data storage roles and data generation roles to ensure the stability of the whole system and the continuous high-quality output of on-chain data. Deep Chain Finance: Eric, on the eve of Filecoin’s mainnet launch, issues such as Filecoin’s pre-collateral aroused a lot of controversy among miners. In your opinion, what kind of impact will Filecoin’s launch have on itself and on the whole distributed storage ecosystem? Do you think the current chaotic FIL prices are reasonable, and what should the normal price of FIL be? Eric: The Filecoin mainnet has launched and many potential problems have been exposed, such as the aforementioned high pre-collateral, the storage resource waste and computing power monopoly caused by unreasonable sector encapsulation, and the harsh penalty mechanism. These problems are quite serious and will greatly affect the development of the Filecoin ecosystem; here are two examples to illustrate. Take the problem of large miners’ computing power monopoly: once large miners have monopolized computing power, a very delicate state emerges – when a miner stores a file for an ordinary user, there is no way to verify on-chain whether what he stored was uploaded by himself or by someone else.
Because a miner can fake another identity and upload data for himself, when choosing which data to save he has only one goal: to inflate his computing power as fast as possible. In terms of computing power there is no difference between saving other people’s data and saving his own. When he saves someone else’s data, he doesn’t know that data; the other party may be anywhere in the world, and the bandwidth between them may not be good enough. The best option is to store his own local data, and the result is that no one stores anyone else’s data on the chain at all. Everyone stores only their own data, because it’s the most economical for them, and the network ends up with essentially no storage utility – no one is providing storage for the mass of retail users. The harsh penalty mechanism will also severely deplete miners’ profits, because DDoS attacks are a very common technique: a big miner can earn a very high profit in a short period by attacking other participants, which makes such attacks profitable for all big miners. As things stand, the vast majority of miners are not very well maintained and are poorly protected against even low-grade DDoS attacks, so the penalty regime is grim for them. The contradiction between an unreasonable system and real demand will inevitably push the system to evolve in a more reasonable direction, so there will be many forked projects with more reasonable mechanisms, attracting Filecoin miners and diverting storage power. Since every project is on the decentralized storage track, the requirements on miners are similar or even compatible, so miners will tend toward the forked projects with better economic benefits and business scenarios, filtering out the projects with real value on the ground.
As for the chaotic FIL price: FIL is a project that has gone through several years and carries too many expectations, so it can only be said that the current situation has its reasons. As for a reasonable price for FIL, there is no way to make a prediction, because in the long run one must consider whether the project’s commercialization lands and the actual value of the data on the chain. In other words, we need to keep observing whether Filecoin becomes a game of computing power or a real carrier of value. Deep Chain Finance: Leo, we just mentioned that Filecoin’s pre-collateral caused dissatisfaction among miners, and after the mainnet launch, the second round of space race test coins were directly turned into real coins, and official selling of FIL hit the market, so many miners said they were betrayed. What I want to know is: EpiK’s main motto is “save the miners eliminated by Filecoin” – how does it deal with Filecoin’s various problems, and how will EpiK achieve this “saving”? Leo: Filecoin’s tacit approval of computing power inflation amounted to the officials directly abandoning the small miners, and turning test coins into real coins hurt the interests of the loyal big miners in one cut. We don’t know why these low-level problems occurred; we can only regret them. EpiK didn’t set out to fork Filecoin; rather, to build a shared knowledge graph ecosystem, EpiK had to integrate decentralized storage, so it chose Filecoin’s most hardcore technology: the PoRep and PoSt decentralized verification. To ensure the quality of knowledge graph data, EpiK only allows community-voted field experts to upload data, so EpiK naturally prevents miners from inflating computing power, and there is no reason for valueless data to take up such an expensive decentralized storage resource.
With computing power inflation impossible, the difference between big miners and small miners is minimal while the amount of knowledge graph data is small. We can’t claim to save the big miners, but we are definitely the optimal choice for the small miners currently being eliminated by Filecoin. Deep Chain Finance: Let me ask Eric: according to the EpiK protocol, EpiK adopts the E2P model, which allows only voted field experts to upload data. This is very different from Filecoin’s P2P model, which allows individuals to upload data as they wish. In your opinion, what are the advantages of the E2P model? If only voted experts can upload data, does that mean the EpiK protocol is not available to everyone? Eric: First, let me explain the advantages of the E2P model over the P2P model. There are five roles in the DAO ecosystem: miner, coin holder, field expert, bounty hunter and gateway. These five roles share the EPK generated every day once the mainnet launches: miners receive 75% of the EPK, field experts 9%, and voting users share 1%. The other 15% fluctuates based on the network’s daily traffic, and this 15% is partly a game between the miners and the field experts. Let me first describe the relationship between these two roles. The first group of field experts is selected by the Foundation and covers different areas of knowledge (a wide range, including not only serious subjects but also home, food, travel, etc.). This group of field experts can recommend the next group, and a recommended expert only needs 100,000 EPK votes to become a field expert. The field expert’s role is to submit high-quality data to the miners, who are responsible for encapsulating this data into blocks.
Network activity is judged by the amount of EPK pledged across the network for daily traffic (1 EPK = 10 MB/day): a higher percentage indicates higher data demand, which requires miners to increase bandwidth quality, while lower data demand requires field experts to provide higher-quality data. This is similar to a library: more visitors means more seats are needed, i.e., the miners are paid to upgrade bandwidth; fewer visitors means more money goes to buying better books to attract visitors, i.e., money for bounty hunters and field experts to generate more quality knowledge graph data. The game between miners and field experts is the most important one in the ecosystem, unlike the game between the authorities and big miners in the Filecoin ecosystem. The game between data producers and data storers, together with a more rational economic model, will inevitably lead the E2P model to generate stored on-chain data of much higher quality than the P2P model, with better bandwidth for data access, resulting in greater business value and better landing scenarios. I will then answer whether this means the EpiK protocol is not universally accessible. The E2P model only constrains the quality of the data generated and stored, not the roles in the ecosystem; on the contrary, with the introduction of the DAO model, the variety of roles in the EpiK ecosystem (which includes roles for ordinary people, such as bounty hunters competent in their tasks) gives everyone a logical way to participate in the system. For example, a miner with computing power can provide storage, a person with knowledge of a certain domain can apply to become an expert (this includes history, technology, travel, comics, food, etc.), and a person willing to mark and correct data can become a bounty hunter.
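The stated pledge-to-traffic rule (1 EPK = 10 MB/day) is easy to express in code. How the floating 15% is split between miners and field experts is not specified exactly in the interview, so the second function below is only an assumed illustration of the described game, not EpiK's actual formula:

```python
EPK_TO_MB_PER_DAY = 10  # stated rule: 1 pledged EPK buys 10 MB/day of traffic

def daily_traffic_mb(pledged_epk: float) -> float:
    # Total daily traffic the network's pledges entitle users to.
    return pledged_epk * EPK_TO_MB_PER_DAY

def floating_split(activity_ratio: float, floating_pool: float = 0.15):
    # Assumption: higher network activity shifts more of the floating 15%
    # toward miners (who must upgrade bandwidth); the rest goes to experts
    # and bounty hunters (who must improve data quality).
    miner_share = floating_pool * activity_ratio
    return miner_share, floating_pool - miner_share
```

Whatever the real formula, the design intent is the same: the two shares always sum to the floating pool, so miners and experts are in a zero-sum game over it.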
The various efficient support tools from the project owner will lower the barriers to entry for these roles, thus allowing different people to do their part in the system and together contribute to the ongoing generation of a high-quality decentralized knowledge graph. Deep Chain Finance: Leo, some time ago EpiK released a white paper and an economy whitepaper, explaining the EpiK concept from the perspectives of technology and economic model respectively. What I would like to ask is: what are the shortcomings of the current distributed storage projects, and how will the EpiK protocol improve on them? Leo: Distributed storage can easily be confused with systems like Alibaba’s OceanDB, but in the field of blockchain we should focus on decentralized storage first. There is a big problem with the decentralized storage on the market now, which is “why not eat meat porridge” (the Chinese equivalent of “let them eat cake”). How to understand it? Is decentralized storage cheaper than centralized storage? By its technical principles it is not – if it were, the centralized storage would have to be rubbish by comparison. What incentive does the average user have to spend more money on decentralized storage to store data? Is it safer? Miners on decentralized storage can shut down at any time, which is by no means as secure as Alibaba or Amazon. More private? There’s no difference between encrypted data on decentralized storage and encrypted data on Amazon. Faster? The bandwidth behind decentralized storage simply doesn’t compare to the fiber in a centralized server room. This is the root problem of the business model: no one is using it, no one is buying it, so what’s the big vision?
The goal of EpiK is to guide all community participants in co-building and sharing field knowledge graph data, which is the best way for machines to understand human knowledge; the more knowledge graph data there is, the more knowledge a robot has and the exponentially more intelligent it becomes. In other words, EpiK uses decentralized storage technology to capture the value of exponentially growing data with linearly growing hardware costs, and that’s where the buy-in for EPK comes in. Organized data is worth far more than organized hard drives, and there will be demand for EPK when robots need intelligence. Deep Chain Finance: Let me ask Leo: roughly how many forked projects does Filecoin have so far? Do you think the waves of forks will increase or decrease after the mainnet launch? Have the requirements of miners at large changed when it comes to participation? Leo: We don’t have specific statistics. Now that the mainnet has launched, we expect forked projects to increase; there are so many constrained miners in the market that they need to be organized efficiently. However, most forked projects we currently see simply modify the parameters of Filecoin’s economic model, which is undesirable: that level of modification can’t change the status quo of miners inflating computing power, and the only change to the market is that some big miners feel more comfortable mining, which won’t help the decentralized storage ecosystem land. We need more reasonable landing scenarios so that idle mining resources can be turned into effective productivity, pitching a 100x coin instead of committing to one FOMO sentiment after another. Deep Chain Finance: How far along is the EpiK Protocol project, Eric? What other big moves are coming in the near future? Eric: The development of the EpiK Protocol is divided into 5 major phases: Phase I, the test network “Obelisk”; Phase II, Mainnet 1.0 “Rosetta”;
Phase III, Mainnet 2.0 “Hammurabi”; Phase IV, enriching the knowledge graph toolkit; and Phase V, enriching the knowledge graph application ecosystem. We are currently in the first phase, the test network “Obelisk”: anyone can sign up to participate in the test network pre-mining to obtain ERC-20 EPK tokens, exchangeable one-to-one after the mainnet launch. We have recently launched ERC-20 EPK on Uniswap; you can buy and sell it freely on Uniswap or download our EpiK mobile wallet. In addition, we will soon launch the EpiK Bounty platform, and we welcome all community members to do tasks together to build the EpiK community. At the same time, we are also pushing forward centralized exchange listings for the token. Users’ Questions User 1: Some KOLs said Filecoin has consumed its value for the next few years, so it will plunge. What do you think? Eric: First of all, judgments about the market correspond to cycles. Being pessimistic about FIL in the short term does not mean being pessimistic about the project’s economic model or about the distributed storage track. We are very confident in the distributed storage track; it will certainly face cycles of growth and decline, which help select the better projects. Since the existing group of miners and the computing power already produced are fixed, and since EpiK miners and FIL miners are compatible, miners can at any time choose the more promising and economically viable projects. As for the claim that Filecoin has consumed the value of the next few years and will therefore plunge: a plunge is not something to predict; in this industry one has to keep learning, iterating and judging value. Market sentiment going up and down is one aspect, but there are more important factors – for example, the big washout in March this year. So it can only be said that such events will slow the development of the FIL community. But prices are indeed unpredictable.
User 2: Actually, in the end, if there are no applications and no one really uploads data, the market value will drop. So what are the landing applications of EpiK? Leo: The best and most direct application of EpiK’s knowledge graph is the question and answer system, which can be an intelligent legal advisor, an intelligent medical advisor, an intelligent chef, an intelligent tour guide, an intelligent game strategy, and so on.
The most famous consensus algorithm, Proof-of-Work (PoW), is represented by coins such as Bitcoin, Ethereum and Litecoin. PoW was the first such algorithm and continues to be widely used today. 📍 PoW is a simple design that is highly resistant to cyber attacks. This approach relies entirely on the computing power of each member of the network to solve problems and reach consensus on a transaction. PoW requires each user participating in validation to prove they have performed computations, in order to prevent spam or DoS attacks on the network. 📍 How does it work? Each node tries to solve complex cryptographic problems using its own computing resources; the one that finds a solution gets the right to confirm the transactions and write the block to the blockchain. This means that nodes (also known as miners) compete with each other to create the next block of transactions on the blockchain. The winning miner, in turn, receives cryptocurrency tokens as a reward for the time and energy spent finding a solution. For example, Bitcoin miners are rewarded in bitcoin. This reward system motivates miners to generate the right solution and ensures network security. 📍 The main disadvantages of PoW. The work required to write to the network makes PoW very difficult to hack (any successful attack would require at least 50% of the hashing power of the entire network), but it also makes it extremely costly in terms of power consumption. Every time thousands of miners work on a single solution, resources are heavily utilised, even though only the block where the solution is found has value. Consequently, with each new block mined, a mass of practically useless by-products accumulates. #Finance #NeuronChain #blockchain #NeuronEx #NeuronWallet #CryptoNeuroNews #crypto
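The puzzle described above can be illustrated with a toy proof-of-work loop. This is only a sketch: real Bitcoin mining hashes a structured block header with double SHA-256 against a compact difficulty target, not a hex-zero prefix.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce such that SHA-256(data + nonce) starts with
    `difficulty` hex zeros -- the higher the difficulty, the longer the search."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    # Verification takes a single hash -- cheap, unlike the search above.
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point: finding a valid nonce is expensive, while checking one is nearly free, which is exactly what makes spamming the network costly.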
Everyone and his grandma know what cryptocurrency mining is. Well, they may not indeed know what it actually is, in technical terms, but they have definitely heard the phrase as it is hard to miss the news about mining sucking in energy like a black hole gobbles up matter. On the other hand, staking, its little bro, has mostly been hiding in the shadows until recently. by StealthEX Today, with DeFi making breaking news across the cryptoverse, staking has become a new buzzword in the blockchain space and beyond, along with the fresh entries to the crypto asset investor’s vocabulary such as “yield farming”, “rug pull”, “total value locked”, and similar arcane stuff. If you are not scared off yet, then read on. Though we can’t promise you won’t be.
Cryptocurrency staking, little brother of crypto mining
There are two conceptually different approaches to achieving consensus in a distributed network, which comes down to transaction validation in the case of a cryptocurrency blockchain. You are most certainly aware of cryptocurrency mining, which is used with cryptocurrencies based on the Proof-of-Work (PoW) consensus algorithm such as Bitcoin and Ether (so far). Here miners compete against each other with their computational resources for finding the next block on the blockchain and getting a reward. Another approach, known as the Proof-of-Stake (PoS) consensus mechanism, is based not on the race among computational resources as is the case with PoW, but on the competition of balances, or stakes. In simple words, every holder of at least one stake, a minimally sufficient amount of crypto, can actively participate in creating blocks and thus also earn rewards under such a consensus model. This process came to be known as staking, and it can be loosely thought of as mining in the PoS environment. With that established, let’s now see why, after so many years of what comes pretty close to oblivion, it has turned into such a big thing.
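The “competition of balances” can be sketched as stake-weighted random selection of the next block producer. This is a toy model only; real PoS chains use verifiable randomness, committees and slashing rather than a plain weighted draw:

```python
import random

def select_validator(stakes: dict, seed: int) -> str:
    """Pick the next block producer with probability proportional to stake."""
    rng = random.Random(seed)  # stand-in for the chain's randomness beacon
    pick = rng.uniform(0, sum(stakes.values()))
    cumulative = 0.0
    for validator, stake in stakes.items():
        cumulative += stake
        if pick <= cumulative:
            return validator
    return validator  # guard against floating-point edge cases
```

Over many rounds, a validator holding 70% of the total stake wins roughly 70% of the blocks, which is why rewards accrue in proportion to the size of the stake.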
Why has staking become so popular, all of a sudden?
The renewed popularity of staking came with the explosive expansion of decentralized finance, or DeFi for short. Essentially, staking is one of the ways to tap into the booming DeFi market, allowing users to earn staking rewards on a class of digital assets that DeFi provides easy access to. Technically, it is more correct to speak of DeFi staking as a new development of an old concept that enjoys its second coming today, or new birth if you please. So what’s the point? With old-school cryptocurrency staking, you would have to manually set up and run a validating node on a cryptocurrency network that uses a PoS consensus algo, keeping in mind all the gory details of a specific protocol so as not to shoot yourself in the foot. This is where you should have already started to enjoy the jitters if you were to take this avenue entirely on your own. Just think of it as having to run a Bitcoin mining rig for some pocket money. Put simply, DeFi staking frees you from all that hassle. At this point, let’s recall what decentralized finance is and what it strives to achieve. In broad terms, DeFi aims at offering the same products and services available today in the traditional financial world, but in a trustless and decentralized way. From this perspective, DeFi staking resembles conventional banking, where people put their money in savings accounts to earn interest. Indeed, you could try to lend out your shekels all by yourself, with varying degrees of success, but banks make it far more convenient and secure. The maturation of the DeFi space advanced the emergence of staking pools and Staking-as-a-Service (SaaS) providers that run nodes for PoS cryptocurrencies on your behalf, allowing you to stake your coins and receive staking rewards. In today’s world, interest rates on traditional savings accounts are ridiculous, while government spending, a handy euphemism for relentless money printing aka fiscal stimulus, is already translating into runaway inflation.
Against this backdrop, it is easy to see why staking has been on the rise.
Okay, what are my investment options?
Now that we have gone through the basics of state-of-the-art cryptocurrency staking, you may ask what options are actually available for a common crypto enthusiast to earn from it. Many high-caliber exchanges like Binance or Bitfinex, as well as custodial platforms such as Coinbase, offer staking of PoS coins. In most cases, you don’t even need to do anything aside from simply holding your coins there to start receiving rewards, as long as you are eligible and meet the requirements. This is called exchange staking.

Further, there are platforms that specialize in staking digital assets. These are known as Staking-as-a-Service providers, and this form of staking is often referred to as soft staking. They enable even non-tech-savvy customers to stake their PoS assets through a third-party service, with all the technical stuff handled by the service provider. Most of these services are custodial, the implication being that you no longer control your coins after you stake them. Figment Networks, MyContainer and Stake Capital are easily the most recognized among SaaS providers.

However, while exchange staking and soft staking have everything to do with finance, they have little to nothing to do with the decentralized part of it, which is, for the record, the primary value proposition of the entire DeFi ecosystem. The point is, with these services you have to deposit your stakable coins into a wallet they control. How can that be considered decentralized? It can’t, because DeFi is all about going trustless: no third parties and, in a narrow sense, no staking that entails the transfer of private keys. That form of staking is called non-custodial, and it is of particular interest from the DeFi point of view. If you read our article about DeFi, you already know how it is possible, so we won’t dwell on this (if, on the off chance, you didn’t, it’s time to catch up).
As DeFi continues to evolve, platforms that allow trustless staking with which you maintain full custody of your coins are set to emerge as well. The space is relatively new, with Staked being probably the first in the field. This type of staking allows you to remain in complete control of your funds, and it perfectly matches DeFi’s ethos, goals and ideals. Still, our story wouldn’t be complete if we didn’t mention utility tokens where staking may serve a whole range of purposes other than supporting the token network or obtaining passive income. For example, with platforms that deploy blockchain oracles such as Nexus Mutual, a decentralized insurance platform, staking tokens is necessary for encouraging correct reporting on certain events or reaching a consensus on a specific claim. In the case of Nexus Mutual, its membership token NXM is used by the token holders, the so-called assessors, for validating insurance claims. If they fail to assess claims correctly, their stakes are burned. Another example is Particl Marketplace, a decentralized eCommerce platform, which designed a standalone cryptocurrency dubbed PART. It can be used both as a cryptocurrency in its own right outside the marketplace and as a stakable utility token giving stakers voting rights facilitating the decentralized governance of the entire platform. Yet another example is the instant non-custodial cryptocurrency exchange service, ChangeNOW, that also recently came up with its stakable token, NOW Token, to be used as an internal currency and a means of earning passive income.
Nowadays, with most economies on pause or going downhill, staking has become a new avenue for generating passive income outside the traditional financial system. As DeFi continues to eat away at services previously provided exclusively by the conventional financial and banking sectors, we should expect more people to get involved in this activity, along with more businesses dipping their toes into these uncharted waters.

Achieving network consensus, establishing decentralized governance, and earning passive income are only three use cases for cryptocurrency staking. No matter how important they are, and they certainly are, there are many other uses along different dimensions that staking can be quite helpful and instrumental for. Again, we are mostly in uncharted waters here, and we can’t reliably say what the future holds for us. On the other hand, we can go and invent it.

And remember, if you need to exchange your coins, StealthEX is here for you. We provide a selection of more than 250 coins and are constantly updating the list so that our customers will find a suitable option. Our service does not require registration and allows you to remain anonymous. Why don’t you check it out? Just go to StealthEX and follow these easy steps: ✔ Choose the pair and the amount for your exchange. For example, ETH to BTC. ✔ Press the “Start exchange” button. ✔ Provide the recipient address to which the coins will be transferred. ✔ Move your cryptocurrency for the exchange. ✔ Receive your coins! The views and opinions expressed here are solely those of the author. Every investment and trading move involves risk. You should conduct your own research when making a decision. The original article was posted on https://stealthex.io/blog/2020/09/08/cryptocurrency-staking-as-it-stands-today/
Hey all, I've been researching coins since 2017 and have gone through 100s of them in the last 3 years. I got introduced to blockchain via Bitcoin of course, analyzed Ethereum thereafter, and since that moment I have had a keen interest in smart contract platforms. I’m passionate about Ethereum but I find Zilliqa to have a better risk-reward ratio, especially because Zilliqa has found an elegant balance between being secure, decentralized and scalable in my opinion.
Below I post my analysis of why from all the coins I went through I’m most bullish on Zilliqa (yes I went through Tezos, EOS, NEO, VeChain, Harmony, Algorand, Cardano etc.). Note that this is not investment advice and although it's a thorough analysis there is obviously some bias involved. Looking forward to what you all think!
Fun fact: the name Zilliqa is a play on ‘silica’ (silicon dioxide) and stands for “silicon for the high-throughput consensus computer.”
This post is divided into (i) Technology, (ii) Business & Partnerships, and (iii) Marketing & Community. I’ve tried to make the technology part readable for a broad audience. If you’ve ever tried understanding the inner workings of Bitcoin and Ethereum you should be able to grasp most parts. Otherwise, just skim through and once you are zoning out head to the next part.
Technology and some more:
The technology is one of the main reasons why I’m so bullish on Zilliqa. First thing you see on their website is: “Zilliqa is a high-performance, high-security blockchain platform for enterprises and next-generation applications.” These are some bold statements.
Before we deep dive into the technology let’s take a step back in time first as they have quite the history. The initial research paper from which Zilliqa originated dates back to August 2016: Elastico: A Secure Sharding Protocol For Open Blockchains where Loi Luu (Kyber Network) is one of the co-authors. Other ideas that led to the development of what Zilliqa has become today are: Bitcoin-NG, collective signing CoSi, ByzCoin and Omniledger.
The technical white paper was made public in August 2017, and since then they have achieved everything stated in it and have also created their own open-source, intermediate-level smart contract language called Scilla (a functional programming language similar to OCaml).
Mainnet has been live since the end of January 2019, with daily transaction rates growing continuously. About a week ago mainnet reached 5 million transactions and 500,000+ addresses in total, along with 2400 nodes keeping the network decentralized and secure. Circulating supply is nearing 11 billion and currently only mining rewards are left. The maximum supply is 21 billion, with annual inflation currently at 7.13% and set to only decrease with time.
Zilliqa realized early on that the usage of public cryptocurrencies and smart contracts was increasing but that decentralized, secure, and scalable alternatives were lacking in the crypto space. They proposed to apply sharding to a public smart contract blockchain, where the transaction rate increases almost linearly with the number of nodes. More nodes = higher transaction throughput and increased decentralization. Sharding comes in many forms and Zilliqa uses network, transaction and computational sharding. Network sharding opens up the possibility of using transaction and computational sharding on top. Zilliqa does not use state sharding for now. We’ll come back to this later.
Before we continue dissecting how Zilliqa achieves this from a technological standpoint, it’s good to keep in mind that making a blockchain decentralised, secure and scalable at the same time is still one of the main hurdles to widespread usage of decentralised networks. In my opinion this needs to be solved before blockchains can get to the point where they can create and add large-scale value. So I invite you to read the next section to grasp the underlying fundamentals. Because, after all, these premises need to be true; otherwise there isn’t a fundamental case to be bullish on Zilliqa, right?
Down the rabbit hole
How have they achieved this? Let’s define the basics first: key players on Zilliqa are the users and the miners. A user is anybody who uses the blockchain to transfer funds or run smart contracts. Miners are the (shard) nodes in the network who run the consensus protocol and get rewarded for their service in Zillings (ZIL). The mining network is divided into several smaller networks called shards, which is also referred to as ‘network sharding’. Miners subsequently are randomly assigned to a shard by another set of miners called DS (Directory Service) nodes. The regular shards process transactions and the outputs of these shards are eventually combined by the DS shard as they reach consensus on the final state. More on how these DS shards reach consensus (via pBFT) will be explained later on.
The Zilliqa network produces two types of blocks: DS blocks and Tx blocks. One DS Block consists of 100 Tx Blocks. And as previously mentioned there are two types of nodes concerned with reaching consensus: shard nodes and DS nodes. Becoming a shard node or DS node is determined by the result of a PoW cycle (Ethash) at the beginning of a DS Block. All candidate mining nodes compete with each other and run the PoW (Proof-of-Work) cycle for 60 seconds, and the submissions achieving the highest difficulty will be allowed on the network. To put it in perspective: the average difficulty for one DS node is ~2 Th/s, equaling 2,000,000 Mh/s, or 55 thousand+ GeForce GTX 1070 / 8 GB GPUs at 35.4 Mh/s. Each DS Block, 10 new DS nodes are admitted. A shard node needs to provide around 8.53 GH/s currently (around 240 GTX 1070s). Dual mining ETH/ETC and ZIL is possible and can be done via mining software such as Phoenix and Claymore. There are pools, and if you have large amounts of hashing power (Ethash) available you could mine solo.
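As a quick sanity check of the hardware figures quoted above, here is a rough sketch; the per-GPU hashrate (35.4 Mh/s for a GTX 1070) and the node difficulties are taken from the text, and the GPU counts are simply derived from them.

```python
# Back-of-the-envelope check of the mining figures quoted above.

GTX_1070_MHS = 35.4            # Mh/s per GTX 1070, as quoted in the text

def gpus_needed(hashrate_mhs):
    """Number of GTX 1070 cards needed to supply a given hashrate (in Mh/s)."""
    return round(hashrate_mhs / GTX_1070_MHS)

ds_node_mhs = 2_000_000        # ~2 Th/s per DS node, expressed in Mh/s
shard_node_mhs = 8_530         # ~8.53 GH/s per shard node, expressed in Mh/s

print(gpus_needed(ds_node_mhs))     # ~56,500 GPUs, matching "55 thousand+"
print(gpus_needed(shard_node_mhs))  # ~241 GPUs, matching "around 240"
```

The derived counts line up with the article's "55 thousand+" and "around 240" figures.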
The PoW cycle of 60 seconds is a peak performance and acts as an entry ticket to the network. The entry ticket serves as a sybil resistance mechanism and makes it incredibly hard for adversaries to spawn lots of identities and manipulate the network with them. After every 100 Tx Blocks, which corresponds to roughly 1.5 hours, this PoW process repeats. In between these 1.5 hours, no PoW needs to be done, meaning Zilliqa’s energy consumption to keep the network secure is low. For more detailed information on how mining works click here. Okay, hats off to you. You have made it this far. Before we go any deeper down the rabbit hole we first must understand why Zilliqa goes through all of the above technicalities and understand a bit more what a blockchain is on a more fundamental level. Because the core of Zilliqa’s consensus protocol relies on the usage of pBFT (practical Byzantine Fault Tolerance), we need to know more about state machines and their function. Navigate to Viewblock, a Zilliqa block explorer, and just come back to this article. We will use this site to navigate through a few concepts.
We have established that Zilliqa is a public and distributed blockchain. Meaning that everyone with an internet connection can send ZILs, trigger smart contracts, etc. and there is no central authority who fully controls the network. Zilliqa and other public and distributed blockchains (like Bitcoin and Ethereum) can also be defined as state machines.
Taking the liberty of paraphrasing examples and definitions given by Samuel Brooks’ medium article, he describes the definition of a blockchain (like Zilliqa) as: “A peer-to-peer, append-only datastore that uses consensus to synchronize cryptographically-secure data”.
Next, he states that: "blockchains are fundamentally systems for managing valid state transitions”. For some more context, I recommend reading the whole Medium article to get a better grasp of the definitions and understanding of state machines. Nevertheless, let’s try to simplify and compile it into a single paragraph. Take traffic lights as an example: all their states (red, amber, and green) are predefined, all possible outcomes are known, and it doesn’t matter if you encounter the traffic light today or tomorrow. It will still behave the same. Managing the states of a traffic light can be done by triggering a sensor on the road or pushing a button, resulting in one traffic light’s state going from green to red (via amber) and another light from red to green.
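The traffic-light analogy can be written down as a tiny state machine: every state and every transition is predefined, so the behavior is fully known in advance. This is an illustrative sketch, not Zilliqa code.

```python
# A minimal traffic-light state machine. All states and transitions are
# predefined, so its behavior is completely deterministic.

TRANSITIONS = {
    "green": "amber",   # green can only go to amber
    "amber": "red",     # amber can only go to red
    "red": "green",     # red goes straight back to green
}

def next_state(state):
    """Advance the light by one transition, e.g. triggered by a road sensor."""
    return TRANSITIONS[state]

state = "green"
state = next_state(state)   # green -> amber
state = next_state(state)   # amber -> red
```

A blockchain is the same idea scaled up: the valid transitions are predefined by the protocol, but the set of reachable states is effectively infinite.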
With public blockchains like Zilliqa, this isn’t so straightforward and simple. It started with block #1 almost 1.5 years ago, and every 45 seconds or so a new block linked to the previous block is added, resulting in a chain of blocks with transactions in it that everyone can verify from block #1 to the current #647,000+ block. The state is ever-changing and the states it can find itself in are infinite. And while a traffic light might work in tandem with various other traffic lights, that is rather insignificant compared to a public blockchain, because Zilliqa consists of 2400 nodes who need to work together to achieve consensus on what the latest valid state is, while some of these nodes may have latency or broadcast issues, drop offline, or are deliberately trying to attack the network.
Now go back to the Viewblock page, take a look at the number of transactions, addresses, block and DS height, and then hit refresh. As expected, you will see new, incremented values for one or all of these parameters. And how did the Zilliqa blockchain manage to transition from the previous valid state to the latest valid state? By using pBFT to reach consensus on the latest valid state.
After having obtained the entry ticket, miners execute pBFT to reach consensus on the ever-changing state of the blockchain. pBFT requires a series of network communications between nodes, and as such it is CPU-bound rather than GPU-bound. As a result, the total energy consumed to keep the blockchain secure, decentralized and scalable is low.
pBFT stands for practical Byzantine Fault Tolerance and is an optimization on the Byzantine Fault Tolerant algorithm. To quote Blockonomi: “In the context of distributed systems, Byzantine Fault Tolerance is the ability of a distributed computer network to function as desired and correctly reach a sufficient consensus despite malicious components (nodes) of the system failing or propagating incorrect information to other peers.” Zilliqa is such a distributed computer network and depends on the honesty of the nodes (shard and DS) to reach consensus and to continuously update the state with the latest block. If pBFT is a new term for you I can highly recommend the Blockonomi article.
The idea of pBFT was introduced in 1999 - one of the authors even won a Turing award for it - and it is well researched and applied in various blockchains and distributed systems nowadays. If you want more advanced information than the Blockonomi link provides click here. And if you’re in between Blockonomi and the University of Singapore read the Zilliqa Design Story Part 2 dating from October 2017. Quoting from the Zilliqa tech whitepaper: “pBFT relies upon a correct leader (which is randomly selected) to begin each phase and proceed when the sufficient majority exists. In case the leader is byzantine it can stall the entire consensus protocol. To address this challenge, pBFT offers a view change protocol to replace the byzantine leader with another one.”
pBFT can tolerate up to ⅓ of the nodes being dishonest (offline counts as Byzantine = dishonest) and the consensus protocol will function without stalling or hiccups. Once more than ⅓ but no more than ⅔ of the nodes are dishonest, the network will be stalled and a view change will be triggered to elect a new DS leader. Only when more than ⅔ of the nodes (over 66%) are dishonest do double-spend attacks become possible.
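The three fault regimes can be sketched as a simple classification by the fraction of dishonest nodes. This is an illustrative simplification; real pBFT deployments size the committee as n ≥ 3f + 1, so the exact boundary differs by a node or two.

```python
# Simplified view of the pBFT fault thresholds described above.

def network_status(dishonest, total):
    """Classify the network by the fraction of dishonest (Byzantine) nodes."""
    if 3 * dishonest < total:        # fewer than 1/3 dishonest
        return "live"                # consensus proceeds normally
    if 3 * dishonest < 2 * total:    # between 1/3 and 2/3 dishonest
        return "view-change"         # stalled until a new leader is elected
    return "unsafe"                  # more than 2/3 dishonest: double-spends possible

# With Zilliqa's 2400 consensus nodes:
print(network_status(700, 2400))    # live
print(network_status(1000, 2400))   # view-change
print(network_status(1700, 2400))   # unsafe
```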
If the network stalls no transactions can be processed and one has to wait until a new honest leader has been elected. When the mainnet was just launched and in its early phases, view changes happened regularly. As of today the last stalling of the network - and view change being triggered - was at the end of October 2019.
Another benefit of using pBFT for consensus, besides low energy usage, is the immediate finality it provides. Once your transaction is included in a block and the block is added to the chain, it’s done. Lastly, take a look at this article where three types of finality are defined: probabilistic, absolute and economic finality. Zilliqa falls under absolute finality (just like Tendermint, for example). Although lengthy already, we have skipped over some of the inner workings of Zilliqa’s consensus: read the Zilliqa Design Story Part 3 and you will be close to having a complete picture of it. Enough about PoW, the sybil resistance mechanism, pBFT, etc. Another thing we haven’t looked at yet is the amount of decentralization.
Currently, there are four shards, each of them consisting of 600 nodes: 1 shard with 600 so-called DS nodes (Directory Service – they need to achieve a higher difficulty than shard nodes) and 1800 shard nodes, of which 250 are shard guards (centralized nodes controlled by the team). The amount of shard guards has been steadily declining, from 1200 in January 2019 to 250 as of May 2020. On the Viewblock statistics, you can see that many of the nodes are located in the US, but those are only the (CPU parts of the) shard nodes who perform pBFT. There is no data on where the PoW sources are coming from. And when the Zilliqa blockchain starts reaching its transaction capacity limit, a network upgrade needs to be executed to lift the current cap of a maximum of 2400 nodes, allowing more nodes and the formation of more shards, which will allow the network to keep on scaling according to demand.

Besides shard nodes there are also seed nodes. The main role of seed nodes is to serve as direct access points (for end-users and clients) to the core Zilliqa network that validates transactions. Seed nodes consolidate transaction requests and forward these to the lookup nodes (another type of node) for distribution to the shards in the network. Seed nodes also maintain the entire transaction history and the global state of the blockchain, which is needed to provide services such as block explorers. Seed nodes in the Zilliqa network are comparable to Infura on Ethereum.
The seed nodes were at first operated only by Zilliqa themselves, exchanges and Viewblock. Operators of seed nodes like exchanges had no incentive to open them up to the greater public, so they were centralised at first. Decentralisation at the seed node level has been steadily rolled out since March 2020 (Zilliqa Improvement Proposal 3). Currently the number of seed nodes is being increased, they are public-facing, and at the same time PoS is applied to incentivize seed node operators and make it possible for ZIL holders to stake and earn passive yields. Important distinction: seed nodes are not involved in consensus! That is still PoW as the entry ticket and pBFT for the actual consensus.
5% of the block rewards are being assigned to seed nodes (from the beginning in 2019) and those are being used to pay out ZIL stakers. The 5% block rewards with an annual yield of 10.03% translate to roughly 610 MM ZILs in total that can be staked. Exchanges use the custodial variant of staking and wallets like Moonlet will use the non-custodial version (starting in Q3 2020). Staking is being done by sending ZILs to a smart contract created by Zilliqa and audited by Quantstamp.
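To make the staking numbers concrete, here is a rough sketch; the 10.03% annual yield and the ~610 MM ZIL cap come from the text, while the 10,000 ZIL holder is a made-up example.

```python
# Illustrative yield math for the staking figures above.

ANNUAL_YIELD = 0.1003          # 10.03% per year, as quoted in the text

def annual_reward(stake_zil):
    """ZIL earned over one year at the quoted annual yield."""
    return stake_zil * ANNUAL_YIELD

# A hypothetical holder staking 10,000 ZIL for a year:
print(annual_reward(10_000))       # 1003.0 ZIL

# If the full ~610 MM ZIL cap were staked, total yearly rewards would be:
print(annual_reward(610_000_000))  # ~61.2 MM ZIL
```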
With a high number of DS and shard nodes, and seed nodes becoming more decentralized too, Zilliqa qualifies for the label of decentralized in my opinion.
Generally speaking, programming languages can be divided into ‘object-oriented’ and ‘functional’. Here is an ELI5 given by a software development academy: “all programs have two basic components, data – what the program knows – and behavior – what the program can do with that data. So object-oriented programming states that combining data and related behaviors in one place, called an “object”, makes it easier to understand how a particular program works. On the other hand, functional programming argues that data and behavior are different things and should be separated to ensure their clarity.”
Scilla is on the functional side and shares similarities with OCaml: OCaml is a general-purpose programming language with an emphasis on expressiveness and safety. It has an advanced type system that helps catch your mistakes without getting in your way. It's used in environments where a single mistake can cost millions and speed matters, is supported by an active community, and has a rich set of libraries and development tools. For all its power, OCaml is also pretty simple, which is one reason it's often used as a teaching language.
Scilla is blockchain agnostic, can be implemented on other blockchains as well, is recognized by academics, and won a so-called Distinguished Artifact Award at the end of last year.
One of the reasons why the Zilliqa team decided to create their own programming language focused on preventing smart contract vulnerabilities is that adding logic to a blockchain, i.e. programming, means that you cannot afford to make mistakes. Otherwise, it could cost you. Blockchains being immutable is all great and fun, but updating your code because you found a bug isn’t the same as with a regular web application, for example. And with smart contracts, cryptocurrencies, and thus value, are inherently involved in some form.
Another difference with programming languages on a blockchain is gas. Every transaction you do on a smart contract platform like Zilliqa or Ethereum costs gas. With gas you basically pay for computational costs. Sending a ZIL from address A to address B costs 0.001 ZIL currently. Smart contracts are more complex, often involve various functions and require more gas (if gas is a new concept click here ).
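The gas idea can be sketched as fee = gas used × gas price. The numbers below are illustrative, except that a simple transfer costs about 0.001 ZIL, as stated above; the split between gas units and gas price, and the contract-call cost, are made-up examples.

```python
# Toy gas model: the fee you pay is the computational work consumed
# (gas units) times the price per unit. Numbers are illustrative.

def tx_fee(gas_used, gas_price_zil):
    """Fee paid for a transaction, in ZIL."""
    return gas_used * gas_price_zil

# A plain A -> B transfer: 1 unit at 0.001 ZIL per unit (illustrative split
# that matches the ~0.001 ZIL transfer cost quoted above).
print(tx_fee(1, 0.001))      # 0.001 ZIL

# A hypothetical contract call consuming 500 gas units costs more:
print(tx_fee(500, 0.001))    # 0.5 ZIL
```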
So with Scilla, similar to Solidity, you need to make sure that “every function in your smart contract will run as expected without hitting gas limits. An improper resource analysis may lead to situations where funds may get stuck simply because a part of the smart contract code cannot be executed due to gas limits. Such constraints are not present in traditional software systems” (Scilla Design Story Part 1).
Some examples of smart contract issues you’d want to avoid are: leaking funds, ‘unexpected changes to critical state variables’ (example: someone other than you setting his or her address as the owner of the smart contract after creation) or simply killing a contract.
Scilla also allows for formal verification. Wikipedia to the rescue: In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code.
“Scilla is being developed hand-in-hand with formalization of its semantics and its embedding into the Coq proof assistant — a state-of-the art tool for mechanized proofs about properties of programs.”
Simply put, with Scilla and its accompanying tooling, developers can mathematically prove that the smart contract they’ve written does what they intend it to do.
Smart contracts in a sharded environment and state sharding
There is one more topic I’d like to touch on: smart contract execution in a sharded environment (and what is the effect of state sharding). This is a complex topic. I’m not able to explain it any easier than what is posted here. But I will try to compress the post into something easy to digest.
Earlier on we established that Zilliqa can process transactions in parallel due to network sharding. This is where the linear scalability comes from. We can define simple transactions: a transaction from address A to B (Category 1), a transaction where a user interacts with one smart contract (Category 2), and the most complex ones where triggering a transaction results in multiple smart contracts being involved (Category 3). The shards are able to process transactions on their own without interference from the other shards. With Category 1 transactions that is doable; with Category 2 transactions sometimes, provided the address is in the same shard as the smart contract; but with Category 3 you definitely need communication between the shards. Solving that requires defining a set of communication rules the protocol needs to follow in order to process all transactions in a generalised fashion.
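The three categories can be sketched as follows: the category follows from how many contracts a transaction touches, and the need for cross-shard communication follows from where those contracts live. The shard-assignment logic here is hypothetical, not Zilliqa's actual routing.

```python
# Sketch of the three transaction categories described above.

def tx_category(num_contracts):
    """Category 1: plain transfer; 2: one contract; 3: multiple contracts."""
    if num_contracts == 0:
        return 1
    if num_contracts == 1:
        return 2
    return 3

def needs_cross_shard(sender_shard, contract_shards):
    """True if any involved contract lives outside the sender's shard,
    in which case shards must communicate to process the transaction."""
    return any(s != sender_shard for s in contract_shards)

print(tx_category(0))                # 1: an A -> B transfer, handled inside one shard
print(needs_cross_shard(3, [3]))     # False: contract co-located with the sender
print(needs_cross_shard(3, [1, 4]))  # True: cross-shard communication required
```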
There is no strictly defined roadmap, but here are the topics being worked on. And via the Zilliqa website there is also more information on the projects they are working on.
Business & Partnerships
It’s not only technology in which Zilliqa seems to be excelling, as their ecosystem has been expanding and starting to grow rapidly. The project is on a mission to provide OpenFinance (OpFi) to the world, and Singapore is the right place to be due to its progressive regulations and futuristic thinking. Singapore has taken a proactive approach towards cryptocurrencies by introducing the Payment Services Act 2019 (PS Act). Among other things, the PS Act will regulate intermediaries dealing with certain cryptocurrencies, with a particular focus on consumer protection and anti-money laundering. It will also provide a stable regulatory licensing and operating framework for cryptocurrency entities, effectively covering all crypto businesses and exchanges based in Singapore. According to PWC, 82% of the surveyed executives in Singapore reported blockchain initiatives underway and 13% of them have already brought the initiatives live to the market. There is also an increasing list of organizations that are starting to provide digital payment services. Moreover, Singaporean blockchain developers Building Cities Beyond have recently created a $15 million innovation grant to encourage development on its ecosystem. This all suggests that Singapore is trying to position itself as (one of) the leading blockchain hubs in the world.
Zilliqa seems to already take advantage of this and recently helped launch Hg Exchange on their platform, together with financial institutions PhillipCapital, PrimePartners and Fundnel. Hg Exchange, which is now approved by the Monetary Authority of Singapore (MAS), uses smart contracts to represent digital assets. Through Hg Exchange financial institutions worldwide can use Zilliqa's safe-by-design smart contracts to enable the trading of private equities. For example, think of companies such as Grab, Airbnb, SpaceX that are not available for public trading right now. Hg Exchange will allow investors to buy shares of private companies & unicorns and capture their value before an IPO. Anquan, the main company behind Zilliqa, has also recently announced that they became a partner and shareholder in TEN31 Bank, which is a fully regulated bank allowing for tokenization of assets and is aiming to bridge the gap between conventional banking and the blockchain world. If STOs, the tokenization of assets, and equity trading will continue to increase, then Zilliqa’s public blockchain would be the ideal candidate due to its strategic positioning, partnerships, regulatory compliance and the technology that is being built on top of it.
What is also very encouraging is their focus on banking the un(der)banked. They are launching a stablecoin basket starting with XSGD. As many of you know, stablecoins are currently mostly used for trading. However, Zilliqa is actively trying to broaden the use case of stablecoins. I recommend everybody to read this text that Amrit Kumar wrote (one of the co-founders). These stablecoins will be integrated in the traditional markets and bridge the gap between the crypto world and the traditional world. This could potentially revolutionize and legitimise the crypto space if retailers and companies will for example start to use stablecoins for payments or remittances, instead of it solely being used for trading.
Zilliqa also released their DeFi strategic roadmap (dating from November 2019), which seems to align well with their OpFi strategy. A non-custodial DEX is coming to Zilliqa, made by Switcheo, which allows cross-chain trading (atomic swaps) between ETH, EOS and ZIL based tokens. They also signed a Memorandum of Understanding for a (soon to be announced) USD stablecoin. And as Zilliqa is all about regulations and being compliant, I’m speculating on it to be a regulated USD stablecoin. Furthermore, XSGD is already created and visible on the block explorer, and XIDR (Indonesian stablecoin) is also coming soon via StraitsX. Here is also an overview of the Tech Stack for Financial Applications from September 2019. Further quoting Amrit Kumar on this:
“There are two basic building blocks in DeFi/OpFi though: 1) stablecoins, as you need a non-volatile currency to get access to this market, and 2) a dex to be able to trade all these financial assets. The rest are built on top of these blocks.
So far, together with our partners and community, we have worked on developing these building blocks with XSGD as a stablecoin. We are working on bringing a USD-backed stablecoin as well. We will soon have a decentralised exchange developed by Switcheo. And with HGX going live, we are also venturing into the tokenization space. More to come in the future.”
Additionally, they also have the ZILHive initiative that injects capital into projects. There have already been 6 waves of various teams working on infrastructure, innovation and research, and they are not only from ASEAN or Singapore but global: see Grantees breakdown by country. Over 60 project teams from over 20 countries have contributed to Zilliqa's ecosystem. This includes individuals and teams developing wallets, explorers, developer toolkits, smart contract testing frameworks, dapps, etc. As some of you may know, Unstoppable Domains (UD) blew up when they launched on Zilliqa. UD aims to replace cryptocurrency addresses with a human-readable name and allows for uncensorable websites. Zilliqa will probably be the only one able to handle all these transactions on-chain due to its ability to scale and its resulting low fees, which is why the UD team launched on Zilliqa in the first place. Furthermore, Zilliqa also has a strong emphasis on security, compliance, and privacy, which is why they partnered with companies like Elliptic, ChainSecurity (part of PwC Switzerland), and Incognito. Their sister company Aqilliz (Zilliqa spelled backwards) focuses on revolutionizing the digital advertising space and is doing interesting things like using Zilliqa to track outdoor digital ads with companies like Foodpanda.
Zilliqa is listed on nearly all major exchanges, has several different fiat gateways, and has recently been added to Binance’s margin trading and futures trading with really good volume. They also have a very impressive team with good credentials and experience. They don't just have “tech people”. They have a mix of tech people, business people, marketers, scientists, and more. Naturally, it's good to have a mix of people with different skill sets if you work in the crypto space.
Marketing & Community
Zilliqa has a very strong community. If you follow their Twitter, you'll notice their engagement is much higher than you'd expect for a coin with approximately 80k followers. They have also been ‘coin of the day’ on LunarCrush many times. LunarCrush tracks real-time cryptocurrency value and social data. According to their data, it seems Zilliqa has a more fundamental and deeper understanding of marketing and community engagement than almost all other coins. While almost all coins have been a bit frozen in the last months, Zilliqa seems to be on its own bull run. It was somewhere in the 100s a few months ago and is currently ranked #46 on CoinGecko. Their official Telegram also has over 20k people and is very active, and their community channel, which is over 7k now, is more active and larger than many other official channels. Their local communities also seem to be growing.
Moreover, their community started ‘Zillacracy’ together with the Zilliqa core team (see www.zillacracy.com). It’s a community-run initiative where people from all over the world are now helping with marketing and development on Zilliqa. Since its launch in February 2020 they have been doing a lot and will also run their own non-custodial seed node for staking. This seed node will also allow them to start generating revenue and become a self-sustaining entity that could potentially scale up into a decentralized company working in parallel with the Zilliqa core team. Compared to all the other smart contract platforms (e.g. Cardano, EOS, Tezos etc.), they don't seem to have started a similar initiative (correct me if I’m wrong though). This suggests, in my opinion, that these other smart contract platforms do not fully understand how to utilize the ‘power of the community’. This is something you cannot ‘buy with money’ and gives many projects in the space a disadvantage.
Zilliqa also released two social products called SocialPay and Zeeves. SocialPay allows users to earn ZILs while tweeting with a specific hashtag. They have recently used it in partnership with the Singapore Red Cross for a marketing campaign after their initial pilot program. It seems like a very valuable social product with a good use case. I can see a lot of traditional companies entering the space through this product, which they seem to suggest will happen. Tokenizing hashtags with smart contracts to get network effect is a very smart and innovative idea.
Regarding Zeeves, this is a tipping bot for Telegram. They already have thousands of signups and plan to keep upgrading it so more and more people use it (e.g. they recently added a quiz feature). They also use it during AMAs to reward people in real time. It’s a very smart approach to grow their communities and get people familiar with ZIL. I can see this becoming very big on Telegram. This tool suggests, again, that the Zilliqa team has a deeper understanding of what the crypto space and community need and is good at finding the right innovative tools to grow and scale.
To be honest, I haven’t covered everything (I’m also reaching the character limit, haha). So many updates are happening lately that it's hard to keep up, such as the International Monetary Fund mentioning Zilliqa in their report, custodial and non-custodial staking, Binance Margin, Futures, the Widget, entering the Indian market, and more. The Head of Marketing, Colin Miles, has also released an overview of what is coming next. And last but not least, Vitalik Buterin has been mentioning Zilliqa lately, acknowledging the project and noting that both projects have a lot of room to grow. There is much more info of course, and a good part of it has been served to you on a silver platter. I invite you to continue researching by yourself :-) And if you have any comments or questions, please post here!
Flatten the Curve. #18. The current cold war between China and America explained. And how China was behind the 2008 Wall Street financial Crash. World War 3 is coming.
China, the USA, and the Afghanistan war are linked. And in order to get here, we will start there. 9/11 happened. Most of the planet mistakenly understood that terrorists had struck a blow against Freedom and Capitalism and Democracy. It was time to invade Afghanistan. Yet all of the terrorists were linked to Saudi Arabia and not Afghanistan, which didn't make sense either. Yet they invaded to find Bin Laden, an ex-CIA asset against the Soviet Union and its subjugation of Afghanistan. A land in the middle of nowhere in relation to North America and the West. It was barren. A backwater without any strategic importance or natural resources. Or was it? The survey for rare earth elements was only made possible by the 2001 U.S. invasion, with work beginning in 2004. Mirzad says the Russians had already done significant surveying work during their military occupation of the country in the 1980s. Mirzad also toes the line for U.S. corporations, arguing, “The Afghan government should not touch the mining business. We have to give enough information to potential investors.” Rare Earth Elements. The elements that make the information age possible. People could understand the First Gulf War and the geopolitical importance of oil. That was easy, but it still didn't sound morally just to have a war for oil. It was too imperialist, and so they fell in line and supported a war for Kuwaiti freedom instead, despite the obvious and public manipulation at the UN by Nayirah. This is some of her testimony to the Congressional Human Rights Caucus: While I was there, I saw the Iraqi soldiers come into the hospital with guns. They took the babies out of the incubators, took the incubators and left the children to die on the cold floor. It was horrifying. I could not help but think of my nephew who was born premature and might have died that day as well.
After I left the hospital, some of my friends and I distributed flyers condemning the Iraqi invasion until we were warned we might be killed if the Iraqis saw us. The Iraqis have destroyed everything in Kuwait. They stripped the supermarkets of food, the pharmacies of medicine, the factories of medical supplies, ransacked their houses and tortured neighbors and friends. There was only one problem. She was the daughter of Saud Al-Sabah, the Kuwaiti ambassador to the United States. Furthermore, it was revealed that her testimony was organized as part of the Citizens for a Free Kuwait public relations campaign, which was run by the American public relations firm Hill & Knowlton for the Kuwaiti government (fun fact, Hill & Knowlton also have extensive ties with Bill Gates). So the public was aghast at her testimony and supported the war against the mainly Soviet-backed, but also American-supported, Saddam Hussein, who had fought a war against Iran after the Iranians refused to ally with American interests following the Islamic Revolution. But that was oil; this was Rare Earth Elements. There was a reason the war was called Operation Enduring Freedom. This natural resource was far more important in the long run. You couldn't have a security surveillance apparatus without it. And what was supposed to be a war on terror was in actuality a territorial occupation for resources. Sleeping Dragon China is next, and where there's smoke, there's fire. Let's go point form for clarity. • China entered the rare earth market in the mid-1980s, at a time when the US was the major producer. But China soon caught up and became the production leader for rare earths. Its heavily state-supported strategy was aimed at dominating the global rare earth industry. • 1989. Beijing’s Tiananmen Square spring. The U.S. government suspends military sales to Beijing and freezes relations. • 1997. Clinton secures the release of Wei and Tiananmen Square protester Wang Dan.
Beijing deports both dissidents to the United States. (If you don't understand that these two were CIA assets working in China, you need to accept that not everything will be published. America wouldn't care about two political activists, but why would it care about two intelligence operatives?) • March 1996. Taiwan’s First Free Presidential Vote. • May 1999. America "accidentally" bombs the Belgrade Chinese Embassy. • 2002. Price competitiveness was hard for the USA to achieve due to low to non-existent Chinese environmental standards; as a result, the US finally stopped its rare earth production. • October 2000. U.S. President Bill Clinton signs the U.S.-China Relations Act. China's takeover of the market share in rare earth elements starts to increase. • October 2001. The Afghanistan war, Enduring Freedom, started to secure rare earth elements. (Haven't you ever wondered how they could mobilize and invade so quickly? The military was already prepared.) • 2005. China establishes a monopoly on global production by keeping mineral prices low and then panics markets by introducing export quotas to raise prices by limiting supply. • Rare earth element prices go into the stratosphere (for example, dysprosium prices do a bitcoin, rocketing from $118/kg to $2,262/kg between 2008 and 2011). • September 2005. Deputy Secretary of State Robert B. Zoellick initiates a strategic dialogue with China. This was presented as a dialogue to acknowledge China's emergence as a superpower (which China probably insisted on), but it was about the rare earth elements market price. • October 2006. China allows North Korea to conduct its first nuclear test; China serves as a mediator to bring Pyongyang back to the negotiating table with the USA. • September 2006. American housing prices start to fall. (At some point after this, secret negotiations must have become increasingly hostile.) • March 2007. China Increases Military Spending. U.S.
Vice President Dick Cheney says China’s military buildup is “not consistent” with the country’s stated goal of a “peaceful rise.” • Mid-2005 to mid-2006. China bought between $100 billion and $250 billion of US housing debt. This debt was bought using the same financial instruments that caused the financial collapse. • 2006. Housing prices started to fall for the first time in decades. • Mid-2006 to mid-2007. China likely added another $390 billion to its reserves. "At the same time, if China stopped buying -- especially now, when the private market is clogged up -- US financial markets would really seize up." Council on Foreign Relations, August 2007. • February 27, 2007. Stock markets in China and the U.S. fell by the most since 2003. Investors leave the money market and flock to government-backed Treasury bills. "I've never seen it like this before," said Jim Galluzzo, who began trading short-maturity Treasuries 20 years ago and now trades bills at RBS Greenwich Capital in Greenwich, Connecticut. "Bills right now are trading like dot-coms." "We had clients asking to be pulled out of money market funds and wanting to get into Treasuries," said Henley Smith, fixed-income manager in New York at Castleton Partners, which oversees about $150 million in bonds. "People are buying T-bills because you know exactly what's in it." • February 13, 2008. The Economic Stimulus Act of 2008 was enacted, which included a tax rebate. The total cost of this bill was projected at $152 billion for 2008. A December 2009 study found that only about one-third of the tax rebate was spent, providing only a modest amount of stimulus. • September 2008. China Becomes Largest U.S. Foreign Creditor at 600 billion dollars. • 2010. China’s market power peaked when it reached a market share of around 97% of all rare earth mineral production. Outside of China, there were almost no other producers left.
Outside of China, the US is the second largest consumer of rare earths in the world behind Japan. About 60% of US rare earth imports are used as catalysts for petroleum refining, making it the country’s major consumer of rare earths. The US military also depends on rare earths. Many of the most advanced US weapon systems, including smart bombs, unmanned drones, cruise missiles, laser targeting, radar systems and the Joint Strike Fighter programme, rely on rare earths. Against this background, the US Department of Defense (DoD) stated that “reliable access to the necessary material is a bedrock requirement for DOD”. • 2010. A trade dispute arose when the Chinese government reduced its export quotas by 40% in 2010, sending rare earth prices in the markets outside China soaring. The government argued that the quotas were necessary to protect the environment. • August 2010. China Becomes World’s Second-Largest Economy. • November 2011. U.S. Secretary of State Hillary Clinton outlines a U.S. “pivot” to Asia. Clinton’s call for “increased investment—diplomatic, economic, strategic, and otherwise—in the Asia-Pacific region” is seen as a move to counter China’s growing clout. • December 2011. U.S. President Barack Obama announces the United States and eight other nations have reached an agreement on the Trans-Pacific Partnership. He later announces plans to deploy 2,500 Marines in Australia, prompting criticism from Beijing. • November 2012. China’s New Leadership. Xi Jinping replaces Hu Jintao as president, Communist Party general secretary, and chairman of the Central Military Commission. Xi delivers a series of speeches on the “rejuvenation” of China. • June 2013. U.S. President Barack Obama hosts Chinese President Xi Jinping for a “shirt-sleeves summit”. • May 19, 2014. A U.S. court indicts five Chinese hackers, allegedly with ties to China’s People’s Liberation Army, on charges of stealing trade technology from U.S. companies. • November 12, 2014.
Joint Climate Announcement. Barack Obama and Chinese President Xi Jinping issue a joint statement on climate change, pledging to reduce carbon emissions. (This very conveniently allows the quotas to fall and saves face for Xi.) • 2015. China drops the export quotas because in 2014 the WTO ruled against China. • May 30, 2015. U.S. Warns China Over South China Sea. (China is trying to expand its buffer zone to build a defense for the coming war.) • January 2016. The government abolishes the one-child policy, now allowing all families to have two children. • February 9, 2017. Trump Affirms One China Policy After Raising Doubts. • April 6–7, 2017. Trump Hosts Xi at Mar-a-Lago. Beijing and Washington agree to expand trade of products and services like beef, poultry, and electronic payments, though the countries do not address more contentious trade issues including aluminum, car parts, and steel. • November 2017. President Xi meets with President Trump in another high-profile summit. • March 22, 2018. Trump Tariffs Target China. The White House alleges Chinese theft of U.S. technology and intellectual property. Coming on the heels of tariffs on steel and aluminum imports, the measures target goods including clothing, shoes, and electronics and restrict some Chinese investment in the United States. • July 6, 2018. U.S.-China Trade War Escalates. • September 2018. Modifications led to the exclusion of rare earths from the final list of products, and they consequently were not subject to import tariffs imposed by the US government in September 2018. • October 4, 2018. Pence Speech Signals Hard-Line Approach. He condemns what he calls growing Chinese military aggression, especially in the South China Sea, criticizes increased censorship and religious persecution by the Chinese government, and accuses China of stealing American intellectual property and interfering in U.S. elections. • December 1, 2018. Canada Arrests Huawei Executive. • March 6, 2019.
Huawei Sues the United States. • March 27, 2019. India and the US sign an agreement to "strengthen bilateral security and civil nuclear cooperation", including the construction of six American nuclear reactors in India. • May 10, 2019. Trade War Intensifies. • August 5, 2019. U.S. Labels China a Currency Manipulator. • November 27, 2019. Trump Signs Bill Supporting Hong Kong Protesters. Chinese officials condemn the move, impose sanctions on several U.S.-based organizations, and suspend U.S. warship visits to Hong Kong. • January 15, 2020. ‘Phase One’ Trade Deal Signed. But the agreement maintains most tariffs and does not mention the Chinese government’s extensive subsidies. Days before the signing, the United States dropped its designation of China as a currency manipulator. • January 31, 2020. Tensions Soar Amid Coronavirus Pandemic. • March 18, 2020. China Expels American Journalists. The Chinese government announces it will expel at least thirteen journalists from three U.S. newspapers—the New York Times, Wall Street Journal, and Washington Post—whose press credentials are set to expire in 2020. Beijing also demands that those outlets, as well as TIME and Voice of America, share information with the government about their operations in China. The Chinese Foreign Ministry says the moves are in response to the U.S. government’s decision earlier in the year to limit the number of Chinese journalists from five state-run media outlets in the United States to 100, down from 160, and to designate those outlets as foreign missions. And here we are. You may have noticed the Rare Earth Elements and the inclusion of Environmental Standards. Yes, these are key to understanding the geopolitical reality and importance of these events. There's a reason the one-child policy stopped. Troop additions. I believe our current political reality started at Tiananmen Square.
The protests were an American-sponsored attempt at regime change after the failure to convince them to leave totalitarian communism and join a greater political framework. Do I have proof? Yes. China, as far as I'm concerned, was responsible for the 2008 economic crisis. The Rare Earth Elements were an attempt to weaken the States and strengthen themselves simultaneously. This stranglehold either forced America to trade with China, or the trade was an American Trojan horse to eventually collapse their economy and cause a revolution after Tiananmen Square failed. Does my second proposal sound far-fetched? Didn't the economy just shut down in response to the epidemic? Aren't both sides blaming the other? At this point, the epidemic seems to be overstated, doesn't it? Don't the casualties tend toward the elder demographic and those already weakened by a primary disease? Exactly the kinds who wouldn't fight in a war. Does this change some of my views on the possibility of upcoming catastrophes and reasons for certain events? No. This is chess, and there are obvious moves in chess, hidden moves in chess, but the best moves involve pieces which can be utilized in different ways if the board calls for it. Is all what it seems? No. I definitely changed a few previously held beliefs prior to today, and I would caution you in advance that you will find some previously held convictions challenged. After uncovering what I did today, I would also strongly suggest reading information cautiously. This is all merely a culmination of ending the Cold War, and once I have events laid out, you will see it as well. At this moment, the end analysis is that a war will start in the near future. This will be mainly for a few reasons: preemptive resource control for water and crops, and population reduction, since we have too many people, not enough jobs, and upcoming resource scarcity. Did you notice my omission of rare earth elements? This is because of Afghanistan.
I would wager China or Russia is somehow supporting the continued resistance through Iran. But events are now accelerating with China because the western coalition has already begun to build up their mines and start production. Do you remember when Trump made a "joke" about buying Greenland? Yeah. It turns out that Greenland has one of the largest rare earth mineral deposits on the planet. Take care. Be safe. Stay aware and be prepared. This message not brought to you by the Bill and Melinda Gates Foundation, Microsoft, Google, Facebook, Elon Musk, BlackRock, Vanguard, the Rockefeller Foundation, RAND Corporation, DARPA, the Rothschilds, Agenda 21, Agenda 30, and ID 2020.
A Bitcoin wallet contains private and public keys, which provide proof of authorization. The Bitcoin network is one of the most secure networks in the world, and that high level of security is achieved by miners: they confirm transactions with high-speed computers, typically within about 10 to 20 minutes, and are rewarded with newly generated bitcoin. Bitcoin uses a consensus mechanism called “proof of work” (PoW) as a method for miners (nodes) to verify information and form new blocks on the blockchain in order to earn new bitcoin. This so-called miner's reward is cut in half after every 210,000 blocks mined, which happens roughly every four years; each such event is known as a “bitcoin halving”. Mining is intentionally designed to be resource-intensive and difficult so that the number of blocks found each day remains steady. Individual blocks must contain a proof of work to be considered valid, and this proof of work is verified by other Bitcoin nodes each time they receive a block; Bitcoin uses the hashcash proof-of-work function. Because solving the puzzle requires enormous computational power, only the miner that finds a solution first gets to propose the next block; once other miners validate it, the block is added to the chain and the competition starts over. Be aware that not everything advertising “free bitcoin mining” is legitimate: one of the more recent malware iterations is the bitcoin miner virus, which hijacks a victim's computer to mine cryptocurrency for someone else.
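The halving schedule and the hashcash-style puzzle described above can be sketched in a few lines of Python. This is a simplified illustration, not real Bitcoin code: the actual protocol hashes an 80-byte block header with double SHA-256 and counts the reward in integer satoshis, whereas here we use a single SHA-256 over arbitrary bytes and a float reward for readability.

```python
import hashlib

def block_reward(height, initial_reward=50.0, halving_interval=210_000):
    """Block subsidy (in BTC) at a given height: halves every 210,000 blocks."""
    return initial_reward / (2 ** (height // halving_interval))

def mine(block_data: bytes, difficulty_bits: int = 16):
    """Toy hashcash-style proof of work: find a nonce such that
    SHA-256(data || nonce) interpreted as a number falls below the target.
    Raising difficulty_bits doubles the expected work per extra bit."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

print(block_reward(0))        # 50.0  (genesis era)
print(block_reward(210_000))  # 25.0  (first halving)
print(block_reward(630_000))  # 6.25  (third halving, May 2020)

# Finding a valid nonce is expensive; checking it is one hash.
nonce, digest = mine(b"example block header", difficulty_bits=16)
print(nonce, digest)
```

This asymmetry is the whole point: miners burn computation searching for a valid nonce, but every other node can verify the result with a single hash, which is why blocks are cheap to validate even though they are expensive to produce.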