A New Approach to Consensus: Swirlds HashGraph

(Special thanks to Leemon Baird, creator of the Swirlds Hashgraph Consensus Algorithm)

As many people here know, my interest in consensus mechanisms runs far and wide. In the KPMG research report I co-authored, "Consensus: Immutable Agreement for the Internet of Value," many consensus mechanisms were discussed, and in Appendix 3 of the paper, many of the major players in the space described their consensus methodologies. One consensus mechanism which wasn't in the paper was the Swirlds Hashgraph Consensus Algorithm. That whitepaper is a great read, and this consensus mechanism holds quite a lot of promise. I have had many discussions with its creator, Leemon Baird, and this blog post comes from those conversations, questions, and emails about the topic. I also asked Leemon to fill out the consensus questionnaire from the KPMG report, and he graciously did; his answers appear at the end of this post.

What exactly is a hashgraph? 

A "hashgraph" is a data structure, storing a certain type of information, and updated according to a certain algorithm.   The data structure is a directed acyclic graph, where each vertex contains the hash of its two parent vertices. This could be called a Merkle DAG, and is used in git, and IPFS, and in other software.

The stored information is a history of how everyone has gossiped. When Alice tells Bob everything she knows, during a gossip sync, Bob commemorates that occurrence by creating a new "event", which is a vertex in the graph, containing the hash of his most recent event and the hash of Alice's most recent event. It also contains a timestamp, and any new transactions that Bob wants to create at that moment. Bob digitally signs this event. The "hashgraph" is simply the set of all known events.

The hashgraph is updated by gossip: each member repeatedly chooses another member at random, and gives them all the events that they don't yet know.  As the local copy of the hashgraph grows, the member runs the algorithm in the paper to determine the consensus order for the events (and the consensus timestamps).  That determines the order of the transactions, so they can be applied to the state, as specified by the app.
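To make that concrete, here is a minimal sketch in Python of the event a member might create after a sync. The field names, JSON serialization, and the `sign` callback are illustrative assumptions, not the actual Swirlds format or API:

```python
import hashlib
import json
import time

def make_event(self_parent_hash, other_parent_hash, transactions, sign):
    # Sketch of the event Bob creates after Alice syncs with him: the
    # hashes of his latest event and of Alice's latest event, a timestamp,
    # and any new transactions. `sign` stands in for Bob's private key.
    body = json.dumps({
        "self_parent": self_parent_hash,    # hash of Bob's latest event
        "other_parent": other_parent_hash,  # hash of Alice's latest event
        "timestamp": time.time(),
        "transactions": transactions,
    }, sort_keys=True).encode()
    return {
        "body": body,
        "hash": hashlib.sha256(body).hexdigest(),
        "sig": sign(body),
    }
```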


What are gossip protocols?

A "gossip protocol" means that information is spread by each computer calling up another computer at random, and sharing everything it knows that the other one doesn't.  It's been used for all sorts of things through the decades. I think the first use of the term "gossip protocol" was for sharing identity information, though the idea probably predates the term. There's a Wikipedia article with more of the history. In Bitcoin, the transactions are gossiped, and the mined blocks are gossiped.  

It's widely used because it's so fast (information spreads exponentially fast) and reliable (a single computer going down can't stop the gossip).

The "gossip about gossip" idea is new with hashgraph, as far as I know.  There are many types of information that can be spread by gossip.  But having the information to gossip, be the history of the gossip itself is a novel idea.  

In hashgraph, it's called "gossip about gossip" rather than "gossip of gossip".  Similar to how your friends might "gossip about what Bob did" rather than "gossip of what Bob did".

Key Characteristics of Swirlds Hashgraph Consensus

1. Ordering and fairness of transactions are the centerpiece of Swirlds. Simply put, Swirlds seeks to fix the ordering problem found in the blockchain world today (due to different consensus methodologies that have trouble addressing this problem) by using Hashgraph Consensus and "gossip about gossip".
2. Hashgraph can achieve consensus with no Proof of Work. So it can be used as an open (non-permissioned) system using Proof of Stake, or as a permissioned system without POW or POS.
3. There's no mining. Any member can create a block (called an "event") at any time.
4. It supports smart contract creation.
5. Block size can be whatever you want. When you create a block ("event"), you put in it any new transactions you want to create at that time, plus a few bytes of overhead. So the block ranges from a few bytes (for no transactions) to as big as you want (for many transactions). But since you're creating many blocks per second, there's no reason to make any particular block terribly big.
6. The core hashgraph system is for distributed consensus on a set of transactions, so all nodes receive all data. One can build a sharded, hierarchical system on top of that. But the core system is a replicated state machine: the data is stored on, and replicated across, every machine.

Other Questions I asked Leemon Baird about the Whitepaper

Below are some questions I asked Leemon after reading the whitepaper. His answers are elaborate and very useful for those seeking to not only understand Hashgraph Consensus but also the inner workings of blockchains and the consensus algorithms that power them. 

1)  Why is fairness important?

Fairness allows new kinds of applications that weren't possible before.  This creates the fourth generation of distributed trust.

For some applications, fairness doesn't matter. If two coins are spent at about the same time, we don't care which one counts as "first", as long as we all agree.  If two people record their driver's license in the ledger at about the same time, we don't care which counts as being recorded "first". 

On the other hand, there are applications where fairness is of critical importance.  If you and I both bid on a stock on the New York Stock Exchange at the same time, we absolutely care which bid counts as being first!  The same is true if we both try to patent the same thing at the same time. Or if we both try to buy the same domain name at the same time. Or if we are involved in an auction. Or if we are playing an online game: if you shoot me and I dodge, it matters whether I dodged BEFORE you shot, or AFTER you shot.

So hashgraph can do all the things blockchain does (with better speed, cost, proofs, etc.). But hashgraph can also do entirely new kinds of things that you wouldn't even consider doing with a blockchain.

It's useful to think about the history of distributed trust as being in 4 generations:

1. Cryptocurrency

2. Ledgers

3. Smart Contracts

4. Markets

I think it's inevitable. Once you have a cryptocurrency, people will start thinking about storing other information in it, which turns it into a public ledger with distributed trust. 

Once you have the ledger storing both money and property, people will start thinking about smart contracts to allow you to sell property for money with distributed trust.

Once you have the ability to do smart contracts, people will start thinking about fair markets to match buyers and sellers.  And to do all the other things that fairness allows (like games, auctions, patent offices, etc).

Swirlds is the first system of the fourth generation.  It can do all the things of the first 3 generations (with speed, etc). But it can also do the things of the 4th generation.


2) You mention internet speed and how faster bandwidth matters. So it acts like the current state of electronic trading in the stock market. Are you not worried about malicious actors with high-speed connections taking over the network? Kind of like in High Frequency Trading in the stock market, where low-latency trading mechanisms, co-location, and huge bandwidth are extremely advantageous for "winning," as Michael Lewis describes in "Flash Boys"?

In hashgraph, a fast connection doesn't allow you to "take over the network". It simply allows you to get your message out to the world faster. If Alice creates a transaction, it will spread through gossip to everyone else exponentially fast, through the gossip protocol. This will take some number of milliseconds, depending on the speed of her Internet connection, and the size of the community. If Bob has a much faster connection, then he might create a transaction a few milliseconds later than her, but get it spread to the community before hers.  However, once her transaction has spread to most people, it is then too late for Bob to count as being earlier than her, even if Bob has infinite bandwidth.

This is analogous to the current stock market, except for one nice feature. If Bob wants an advantage of a few milliseconds, he can't just build a single, fast pipe to the single, central server. He instead needs a fast connection to everyone in the network. And the network might be spread across every continent.  So he'll just need to have a fast connection to the Internet backbone. That's the best he can do, and anyone can do that, so it isn't "unfair". 

In other words, the advantage of a fast connection is smaller than the advantage he could get in the current stock market. And it's fair. If the "server" is the entire community, then it is fair to say that whichever transaction reached the entire community first, will count as being "first". Bob's fast connection benefits him a little, but it also benefits the community by making the entire system work faster, so it's good.

"Flash Boys" was a great book, and I found it inspiring. Our system mitigates the worst parts of the existing system, where people pay to have their computers co-located in the same building as the central server, or pay huge amounts to use a single fast pipe tunneled through mountains. In a hashgraph system, there is no central server, so that kind of unfairness can't happen.

3) You mention in the whitepaper that increasing block size "can make the system of fairness worse". Why is that?

That's true for a POW system like Bitcoin.  If Alice submits a transaction, then miner Bob will want to include it in his block, because he's paid a few cents to do so.  But if Carol wants to get her transaction recorded in history before Alice's, she can bribe Bob to ignore Alice's transaction, and include only Carol's in the block. If Bob succeeds in mining the block, then Alice's transaction is unfairly moved to a later point in history, because she has to wait for the next miner to include her transaction.

If each block contains 1 transaction, then Alice has suffered a 1-slot delay in where her transaction appears in history. If each block contains a million transactions, then Alice has suffered a million-slot delay. In that sense, big blocks are worse than small blocks. Big blocks allow dishonest people to delay your transactions into a later position in the consensus order.

The comment about block size doesn't apply to leader-based systems like Paxos. In them, there isn't really a "block". The unfairness simply comes from the current leader accepting a transaction from Alice, but then delaying a long time before sending it out to be recorded by the community.  The comment also doesn't apply to hashgraph.

4) Can you explain how not remembering old blocks works? And why one just needs to know the most frequent blocks, and how this doesn't fly in the face of the longest chain rule?

Hashgraph doesn't have a "longest chain rule".  In blockchain, you absolutely must have a single "chain", so if it ever forks to give you two chains, the community must choose to accept one and reject the other. They do so using the longest chain rule. But in hashgraph, forking is fine. Every block is accepted.  The hashgraph is an enormous number of chains, all woven together to form a single graph. We don't care about the "longest chain". We simply accept all blocks.  (In hashgraph, a block is called an "event").

What we have to remember is not the "most frequent block". Instead, we remember the state that results from the consensus ordering of the transactions. Imagine a cryptocurrency, where each transaction is a statement "transfer X coins from wallet Y to wallet Z". At some point, the community will reach a consensus on the exact ordering of the first 100 transactions. At that time, each member of the community can calculate exactly how many coins are in each wallet after processing those 100 transactions (in the consensus order), before processing transaction number 101.  They will therefore agree on the "state", which is the list of amounts of coins in all the non-empty wallets. Each of them digitally signs that state. They gossip their signatures. So then each member will end up having a copy of the state along with the signatures from most of the community.  This combination of the state and list of signatures is something that mathematically proves exactly how much money everyone had after transaction 100.  It proves it in a way that is transferable: a member could show this to a court of law, to prove that Alice had 10 coins after transaction 100 and before transaction 101.

At that point, each member can discard those first 100 transactions. And they can discard all the blocks ("events") that contained those 100 transactions. There's no need to keep the old blocks and transactions, because you still have the state itself, signed by most of the community, proving that there was consensus on it.

Of course, you're also free to keep that old information. Maybe you want to have a record of it, or want to do audits, or whatever. But the point is that there's no harm in throwing it away. 
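As a loose illustration of that signed-state idea, here is a sketch using ECDSA with SHA-256 via Python's `cryptography` package. The JSON serialization and the greater-than-2/3 signature threshold are simplifying assumptions, not the platform's actual format:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def sign_state(private_key, state):
    # Serialize the state (e.g., wallet -> coin balance) deterministically
    # and sign it, so identical states yield identical bytes to sign.
    data = json.dumps(state, sort_keys=True).encode()
    return private_key.sign(data, ec.ECDSA(hashes.SHA256()))

def state_is_proven(state, signatures, public_keys):
    # The state plus enough member signatures is a transferable proof:
    # here we require valid signatures from more than 2/3 of the members.
    data = json.dumps(state, sort_keys=True).encode()
    valid = 0
    for member, sig in signatures.items():
        try:
            public_keys[member].verify(sig, data, ec.ECDSA(hashes.SHA256()))
            valid += 1
        except InvalidSignature:
            pass
    return 3 * valid > 2 * len(public_keys)
```

In this sketch a member's key pair would come from something like `ec.generate_private_key(ec.SECP256R1())`.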

5) You mention that blockchains don't have a guarantee of Byzantine agreement, because a member never reaches certainty that agreement has been achieved. Can you elaborate on this and explain why Hashgraph can achieve this?

Bitcoin doesn't have Byzantine fault tolerance, because of how that's defined.  Hashgraph has it, because of the math proof in the paper.

In computer science, there is a famous problem called "The Byzantine Generals Problem".  Here's a simplified version. You and I are both generals in the Byzantine army. We need to decide whether to attack at dawn. If we both attack or both don't attack, we will be fine. But if only one of us attacks alone, he will be defeated, because he doesn't have enough forces to win by himself.

So, how can we coordinate? This is in an age before radio, so you can send me a messenger telling me to attack. But what if the messenger is captured, so I never get the message?  Clearly, I'm going to need to send a reply by messenger to let you know I got the message.  But what if the reply is lost?  Clearly, you need to send a reply to my reply to let me know it got through. But what if that is lost?  We could spend eternity replying to each other, and never really know for sure we are in agreement. There was actually a theater play that dramatized this problem.

The full problem is more complicated, with more generals, and with two types of generals. But that's the core of the problem.  The standard definition is that a computer system is "Byzantine fault tolerant" if it solves the problem in the following sense:

- assume there are N computers, communicating over the Internet

- each computer starts with a vote of YES or NO

- all computers need to eventually reach consensus, where we all agree on YES, or all agree on NO

- all computers need to know when the consensus has been reached

- more than 2/3 of the computers are "honest", which means they follow the algorithm correctly, and although an honest computer may go down for a while (and stop communicating), it will eventually come back up and start communicating again

- the internet is controlled by an attacker, who can delay and delete messages at will (except, if Alice keeps sending messages to Bob, the attacker eventually must allow one to get through; then if she keeps sending, he must eventually allow another one to get through, and so on)

- each computer starts with a vote (YES or NO), and can change that vote many times, but eventually a time must come when the computer "decides" YES or NO.  After that point, it must never again change its mind.

- all honest computers must eventually decide (with probability one), and all must decide the same way, and it must match the initial vote of at least one honest member.

That's just for a single YES/NO question.  But Byzantine fault tolerance can also be applied to more general problems.  For example, the problem of deciding the exact ordering of the first 100 transactions in history.

So if a system is Byzantine fault tolerant, that means all the honest members will eventually know the exact ordering of the first 100 transactions. And, furthermore, each member will reach a point in time where they know that they know it. In other words, their opinion doesn't just stop changing. They actually know a time when it is guaranteed that consensus has been achieved.

Bitcoin doesn't do that. Your probability of reaching consensus grows after each confirmation. You might decide that after 6 confirmations, you're "sure enough".  But you're never mathematically certain. So Bitcoin doesn't have Byzantine fault tolerance. 

There are a number of discussions online about whether this matters. But, at least for some people, this is important.  

If you're interested in more details on Bitcoin's lack of Byzantine fault tolerance, we can talk about what happens if the internet is partitioned for some period of time. When you start thinking about the details, you actually start to see why Byzantine fault tolerance matters.

6) You mention in the whitepaper that "in hashgraph, every container is used, and none are discarded". Why is this important, and why is this not a waste?

In Bitcoin, you may spend lots of time and electricity mining a block, only to discover later that someone else mined a block at almost the same time, and the community ends up extending their chain instead of yours. So your block is discarded. You don't get paid. That's a waste. Furthermore, Alice may have given you a transaction that ended up in your block but not in that other one. So she thought her transaction had become part of the blockchain, and then later learned that it hadn't.  That's unfortunate.

In hashgraph, the "block" (event) definitely becomes part of the permanent record as soon as you gossip it. Every transaction in it definitely becomes part of the permanent record.  It may take some number of seconds before you know exactly what position it will have in history. But you **immediately** know that it will be part of history. Guaranteed.

In the terminology of Bitcoin, the "efficiency" of hashgraph is 100%, because no block is wasted.

Of course, after the transactions have become part of the consensus order and the consensus state is signed, then you're free to throw away the old blocks.  But that isn't because they failed to be used.  That's because they **were** used, and can now be safely discarded, having served their purpose.  That's different from the discarded blocks in Bitcoin, which are not used, and whose transactions aren't guaranteed to ever become part of the history / ledger.

7) On page 8 of the whitepaper you wrote, "Suppose Alice has hashgraph A and Bob has hashgraph B. These hashgraphs may be slightly different at any given moment, but they will always be consistent. Consistent means that if A and B both contain event X, then they will both contain exactly the same set of ancestors for X, and will both contain exactly the same set of edges between those ancestors. If Alice knows of X and Bob does not, and both of them are honest and actively participating, then we would expect Bob to learn of X fairly quickly, through the gossip protocol. But the consensus algorithm does not make any assumptions about how fast that will happen. The protocol is completely asynchronous, and does not make assumptions about timeout periods, or the speed of gossip, or the rate at which progress is made."   What if they are not honest?

If Alice is honest, then she will learn what the group's consensus is.

If Bob is NOT honest, then he might fool himself into thinking the consensus was something other than what it was. That only hurts himself.

If more than 2/3 of the members are honest, then they are guaranteed to achieve consensus, and each of them will end up with a signed state that they can use to prove to outsiders what the consensus was.  

In that case, the dishonest members can't stop the consensus from happening.  The dishonest members can't get enough signatures to forge a bad "signed state".  The dishonest members can't stop the consensus from being fair.

By the way, that "2/3" number up above is optimal.  There is a theorem that says no algorithm can achieve Byzantine fault tolerance with a number better than 2/3. So that number is as good as it can be.

8) Are the elections mentioned in the whitepaper to decide the order of transactions or information?

Yes.  Specifically, the elections decide which witness events are famous witnesses.  Then those famous witness events determine the order of events, which determines the order of transactions (and consensus timestamps).

9) What makes yellow "strongly see" in the chart on page 8 of the whitepaper?

If Y is an ancestor of X, then X can "see" Y, because there is a path from X to Y that goes purely downward in the diagram.  If there are **many** such paths from X to Y, which pass through more than 2/3 of the members, then X can "strongly see" Y.  That turns out to be the foundation of the entire math proof.

(To be complete: for X to see Y, it must also be the case that no forks by the creator of Y are ancestors of X. But normally, that doesn't happen.)
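A rough sketch of that check, under simplifying assumptions: events are dicts keyed by hash, each holding its two parent hashes and its creator, and the fork caveat from the parenthetical above is ignored:

```python
def ancestors(event_hash, events):
    # Every event reachable by following parent links downward.
    seen, stack = set(), [event_hash]
    while stack:
        for p in events[stack.pop()]["parents"]:
            if p is not None and p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def strongly_sees(x, y, events, num_members):
    # X strongly sees Y when downward paths from X to Y pass through
    # events created by more than 2/3 of the members: count the creators
    # of all events lying on any path from X down to Y (inclusive).
    if y not in ancestors(x, events):
        return False
    on_paths = {z for z in ancestors(x, events) | {x}
                if z == y or y in ancestors(z, events)}
    creators = {events[z]["creator"] for z in on_paths}
    return 3 * len(creators) > 2 * num_members
```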

10) What's the difference between weak BFT (Byzantine Fault Tolerance) and strong BFT? Which are you using?

Hashgraph is BFT.  It is strong BFT.

"Weak BFT" means "not really BFT, but we want to use the term anyway".  

Those aren't really technical terms.  A Google search for "weak byzantine fault tolerance" (in quotes) says that phrase doesn't occur even once on the entire web.  And "weak BFT" (in quotes) occurs 6 times, none of which refer to Byzantine stuff.

People like to use terms like "Byzantine" in a weaker sense than their technical definition.  The famous paper "Practical Byzantine Fault Tolerance" describes a system that, technically, isn't Byzantine Fault Tolerant at all.  My paper references two other papers that talk about that fact.  So speaking theoretically, those systems aren't actually BFT.  Hashgraph truly is BFT.

We can also talk about it practically, rather than theoretically.  The paper I referenced in my tech report talks about how simple attacks on the network can almost completely paralyze leader-based systems like PBFT or Paxos.  That's not too surprising. If everything is coordinated by a leader, then you can just flood that leader's single computer with packets, and shut down the entire network.  If there is a mechanism for choosing a new leader (as Paxos has), you can switch to attacking the new leader.

Systems without leaders, like Bitcoin and hashgraph, don't have that problem.

Some people have also used "Byzantine" in a weaker sense that is called being "synchronous".  This means that you assume an honest computer will **always** respond to messages within X seconds, for some fixed constant X.  Of course, that's not a realistic assumption if we are worried about attacks like I just described.  That's why it's important that systems like both Bitcoin and hashgraph are "asynchronous".  Some people even like to abuse that term by saying a system is "partially asynchronous". So to be clear, I would say that hashgraph is "fully asynchronous" or "completely asynchronous".  That just means we don't have to make any assumptions about how fast a computer might respond.  Computers can go down for arbitrarily-long periods of time. And when they come back up, progress continues where it left off, without missing a beat.

11) Do "Famous witnesses" decide which transactions come first?

Yes. They decide the consensus order of all the events. And they decide the consensus time stamp for all the events.  And that, in turn, determines the order and timestamp for the transactions contained within the events.

It's worth pointing out that a "witness" or a "famous witness" is an event, not a computer. There isn't a computer acting as a leader to make these decisions.  These "decisions" are virtually being made by the events in the hashgraph. Every computer looks at the hashgraph and calculates what the famous witness is saying. So they all get the same answer. There's no way to cheat.

12) On page 8 of the whitepaper you write, "This virtual voting has several benefits. In addition to saving bandwidth, it ensures that members always calculate their votes according to the rules." Who makes the rules?

The "rules" are simply the consensus algorithm given in the paper.  Historically, Byzantine systems that aren't leader based have been based on rounds of voting.  In those votes, the "rules" are, for example, that Alice must vote in round 10 in accordance with the majority of the votes she received from other people in round 9.  But since Alice is a person (or a computer), she might cheat, and vote differently. She might cheat by voting NO in round 10, even though she received mostly YES votes from others in round 9. 

But in the hashgraph, every member looks at the hashgraph and decides how Alice is supposed to vote in round 10, given the virtual votes she is supposed to have received in round 9.  Therefore, the real Alice can't cheat. Because the "voting" is done by the "virtual Alice" that lives on everyone else's computers.
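A very loose sketch of that principle (the real algorithm in the paper adds rounds, supermajority thresholds, and coin rounds, so this only shows why the real Alice can't cheat): every member derives the vote from the hashgraph itself.

```python
def virtual_vote(witness, prev_round_witnesses, prev_votes, strongly_sees):
    # The round-r vote of a witness is forced by the rules: the majority
    # of the round r-1 votes among witnesses it can strongly see. Every
    # member computes this locally from the hashgraph, so "virtual Alice"
    # can't vote NO when the votes she received say YES.
    visible = [w for w in prev_round_witnesses if strongly_sees(witness, w)]
    yes = sum(1 for w in visible if prev_votes[w])
    return 2 * yes >= len(visible)  # tie-break toward YES is an assumption
```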

There are also higher-level rules that are enforced by the particular app built on top of the Swirlds platform. For example, the rule that you can't spend the same coin twice.  But that's not what that sentence was talking about.

13) How are transactions validated and who validates them?

The Swirlds platform runs a given app on the computers of every member who is part of that shared world (a "swirld").  In Bitcoin terminology, the community of members is a "network" of "full nodes" (or of "miners"). The hashgraph consensus algorithm ensures that every app sees the same transactions in the same order. The app is then responsible for updating the state according to the rules of the application.  For example, in a cryptocurrency app, a "transaction" is a statement that X coins should be transferred from wallet Y to wallet Z. The app checks whether wallet Y has that many coins. If it does, the app performs the transfer, by updating its local record of how much is in Y and how much is in Z.  If Y doesn't have that many coins, then the app does nothing, because it knew the transaction was invalid.

Since everyone is running the same app (which is Java code, running in a sandbox), and since everyone ends up with the same transactions in the same order, then everyone will end up with the same state.  They will all agree exactly how many coins are in Y after the first 100 transactions. They will all agree on which transfers were valid and which were invalid.  And so, they will all sign that state. And that signed state is the replicated, immutable ledger.
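A toy version of such an app's state transition, assuming transactions shaped like `{"from": Y, "to": Z, "amount": X}` (illustrative names, not the Swirlds Java API):

```python
def apply_transaction(balances, tx):
    # "Transfer X coins from wallet Y to wallet Z": invalid transfers are
    # ignored identically by every member, so states never diverge.
    amount, src, dst = tx["amount"], tx["from"], tx["to"]
    if amount <= 0 or balances.get(src, 0) < amount:
        return balances
    balances[src] -= amount
    balances[dst] = balances.get(dst, 0) + amount
    return balances

def replay(consensus_ordered_txs, genesis_balances):
    # Same transactions, same order, same code => same state to sign.
    state = dict(genesis_balances)
    for tx in consensus_ordered_txs:
        state = apply_transaction(state, tx)
    return state
```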

14) What was the original motivation for creating Swirlds?

We can use the cloud to collaborate on a business document, or play a game, or run an auction. But it bothered me that "cloud" meant a central server, with all the costs and security issues that implies.  It bothered me a lot. 

It should be possible for anyone to create a shared world on the internet, and invite as many participants as they want, to collaborate, or buy and sell, or play, or create, or whatever.  There shouldn't be any expensive server. It should be fast and fair and Byzantine.  And the rules of the community should be enforced, even if no single individual is trusted by everyone. This should be what the internet looks like.  This is my vision for how cyberspace should run.  This is what we need.

But no such system existed.  Whenever I tried to design such a system, I kept running into roadblocks. It clearly needed to be built on a consensus system that didn't use much computation, didn't use much bandwidth, and didn't use much storage, yet would be completely fair, fast, and cheap.

I would work hard on it for days until I finally convinced myself it was impossible. Then, a few weeks later, it would start nagging at me again, and I'd have to go back to working intensely on it, until I was again convinced it was impossible.

This went on for a long time, until I finally found the answer. If there's a hashgraph, with gossip about gossip, and virtual voting, then you get fairness and speed and a math proof of Byzantine fault tolerance. When I finally had the complete algorithm and math proof, I then built the software and a company. The entire process was a pretty intense 3 years.  But in the end, it turned out to be a system that is very simple.  And which seems obvious in retrospect.

SUMMARY:

The DAG with hashes is not new, and has been widely used. Using it to store the history of gossip ("gossip about gossip") is new.  

The consensus algorithm looks similar to voting-based Byzantine algorithms that have been around for decades. But the idea of using "virtual voting" (where no votes ever have to cross the internet) is new. 

A distributed database with consensus (a "replicated state machine") is not new. But a platform for apps that can respond to both the non-consensus and consensus order is new.

It appears that hashgraph and the Swirlds platform can do all the things that are currently being done with blockchain, and that hashgraph has greater efficiency. But hashgraph also offers new kinds of properties, which will allow new kinds of applications to be built.

Overall Consensus Methodology

What is the underlying methodology of the consensus used?

The Swirlds hashgraph consensus system is used to achieve consensus on the fair order of transactions. It also gives the consensus timestamps on when each transaction was received by the community. It also gives consensus on enforcement of rules, such as in smart contracts.

How many nodes are needed to validate a transaction (% vs. number)? How would this impact a limited-participation network?

Consensus is achieved when more than 2/3 of the community is online and participating. Almost a third of the community could be attackers, and they would be unable to stop consensus, or to unfairly bias what order becomes the consensus for the transactions.

Do all nodes need to be online for the system to function? Number of current nodes?

Over 2/3 of the nodes need to be online for consensus. If fewer are online, the transactions are still communicated to everyone online very quickly, and everyone will immediately know for certain that those transactions are guaranteed to be part of the immutable ledger. They just won't know the consensus order until more than 2/3 come online.

Does the algorithm have the underlying assumption that the participants in the network are known ahead of time? 

No, that's not necessary.  Though it can be run that way, if desired.

Ownership of nodes - Consensus Provider or Participants of Network?

The platform can be used to create a network that is permissioned or not.

What are the current stages of the mechanism?

Transactions are put into "events", which are like blocks, where each miner can mine many blocks per second. There is never a need to slow down mining to avoid forking the chain. The events are spread by a gossip protocol. When Alice gossips with Bob, she tells Bob all of the events that she knows that he doesn't, and vice versa. After Bob receives those, he creates a new event commemorating that gossip sync, which contains the hash of the last event he created and the hash of the last event Alice created before syncing with him. He can also include in the event any new transactions he wants to create at that moment. And he signs the event. That's it. There is no need for any other communication, such as voting. There is no need for proof of work to slow down mining, because anyone can create events at any time. 
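Sketched from Bob's side, one sync might look like the following (a toy model with events as dicts; the serialization and the `sign` callback are assumptions, not the actual Swirlds API):

```python
import hashlib
import json
import time

def on_sync(my_events, their_events, my_latest_hash, their_latest_hash,
            new_txs, sign):
    # Absorb every event Alice has that Bob lacks, then commemorate the
    # sync with a new signed event whose two parents are Bob's latest
    # event and Alice's latest event.
    for h, ev in their_events.items():
        my_events.setdefault(h, ev)
    body = json.dumps({
        "self_parent": my_latest_hash,
        "other_parent": their_latest_hash,
        "timestamp": time.time(),
        "transactions": new_txs,
    }, sort_keys=True).encode()
    event = {"body": body,
             "hash": hashlib.sha256(body).hexdigest(),
             "sig": sign(body)}
    my_events[event["hash"]] = event
    return event
```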

When is a transaction considered "safe" or "live"?

As soon as Alice hears of a transaction, she immediately verifies it and knows for certain that it will be part of the official history. And so does anyone she gossips with after that. After a short delay (seconds to a minute or two), she will know its EXACT location in history, and have a mathematical guarantee that this is the consensus order. That knowledge is not probabilistic (as in, after 6 confirmations, you're pretty sure). It's a mathematical guarantee.

What is the Fault Tolerance?  (How many nodes need to be compromised before everything is shut down?)

This is Byzantine fault tolerant as long as less than 1/3 of the nodes are faulty / compromised / attacking.  The math proof assumes the standard assumptions: attacking nodes can collude, and are allowed to mostly control the internet. Their only limit on control of the internet is that if Alice repeatedly sends Bob messages, they must eventually allow Bob to receive one.

Is there a forking vulnerability?

The consensus can't fork as long as less than 1/3 are faulty / attacking.

How are the incentives defined within a permissioned system for the participating nodes?

Different incentive schemes can be built on top of this platform.

How does a party take ownership of an asset?

This is a system for allowing nodes to create transactions, and the community to reach consensus on what transactions occurred, and in what order. Concepts like "assets" can be built on top of this platform, as defined by an app written on it.

Cryptography/Strength of Algorithm:

How are the keys generated?

Each member (node) generates its own public-private key pair when it joins.

Does the algorithm have a leader or no?

No leader.

How is node behavior currently measured for errors?

If a node creates an invalid event (bad hashes or bad signature) then that invalid event is ignored by honest nodes during syncs. Errors in a node can't hurt the system as long as less than 1/3 of the nodes have errors.
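As a sketch of that check (the event layout and the `verify_sig` callback are hypothetical):

```python
import hashlib

def is_valid_event(event, known_events, verify_sig):
    # Honest nodes ignore an event during a sync if its hash doesn't match
    # its body, a parent hash points to an unknown event, or the creator's
    # signature fails to verify.
    parents_known = all(p in known_events
                        for p in event["parent_hashes"] if p is not None)
    hash_ok = hashlib.sha256(event["body"]).hexdigest() == event["hash"]
    sig_ok = verify_sig(event["creator"], event["body"], event["sig"])
    return parents_known and hash_ok and sig_ok
```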

Governance:

How are controls/governance enforced?

If an organization uses the platform to build a network, then that organization can structure governance in the way they desire.

Tokenization (if used):

Are there any transaction signing mechanisms?

Every event is signed, which acts as a signature on the transactions within it. An app can be built on top of this platform that would define tokens or cryptocurrencies.

Performance:

What is the current time measurement? For a transaction to be validated? For consensus to be achieved?

The software is in an early alpha stage. The answers to this questionnaire refer to what the platform software will have when it is complete. For a replicated database (every node gets every transaction), it should be able to run at the bandwidth limit, where it handles as many transactions per second as the bandwidth of each node allows, where each node receives and sends each transaction once (on average) plus a small amount of overhead bytes (a few percent size increase). For a hierarchical, sharded system (where a transaction is only seen by a subset of the nodes, and most nodes never see it), it should be possible to scale beyond that limit. But for now, the platform is assuming a replicated system where every node receives every transaction.

Security:

Does your mechanism have digital signatures?

Yes, it uses standards for signatures, hashes, and encryption (ECDSA, SHA-256, AES, SSL/TLS).

How does the system ensure the synchrony of the network (what is the time needed for the nodes to sync up with the network)?

No synchrony is assumed. There is no assumption that an honest node will always respond within a certain number of seconds. The Byzantine fault tolerance proofs are for a fully asynchronous system. The community simply makes progress on consensus whenever the communication happens. If every computer goes to sleep, then progress continues as soon as they wake up.  It should even work well over sneaker-net, where devices only sync when they are in physical proximity, and it might take days or months for gossip to reach everyone. Even in that situation, the consensus mechanism should be fine, working slowly as the communication slowly happens. In normal internet connections with a small group, consensus can happen in less than a second.

Do the nodes have access to an internal clock/time mechanism to stay sufficiently accurate?

There is a consensus timestamp on an event, which is the median of the clocks of those nodes that received it. This median will be as accurate as the typical honest computer's clock. This consensus timestamp does NOT need to be accurate for reaching consensus on the ordering of the events, or for anything important in the algorithm. But it can be useful to the applications built on top of this platform.
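In sketch form (the received times would come from the algorithm's definition of when each member first received the event; this just shows the median step):

```python
import statistics

def consensus_timestamp(received_times):
    # Median of the timestamps at which each member received the event.
    # With fewer than 1/3 dishonest members, the median always falls on
    # (or between) honest clock readings.
    return statistics.median(received_times)
```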

Privacy:

How does the system ensure privacy?

The platform allows each member to define their own key pair, and use that as their identity. If an app is built on top of this platform to establish a network, the app designer can decide how members will be allowed to join, such as by setting up a CA for their keys, or by having votes for each member, or by using proof-of-stake based on a cryptocurrency, etc.  The app can also create privacy, such as by allowing multiple wallets for one user. But the platform simply manages consensus based on a key pair per node.

Does the system require verifiable authenticity of the messages delivered between the nodes (Is signature verification in place?)

Yes, everything is signed, and all comm channels are SSL encrypted. 

How does data encryption work?

All comm during a gossip sync is SSL/TLS encrypted, using a session key negotiated using the keys of the two participants.  If an app wants further encryption, such as encrypting data inside a transaction so that only a subset of the members can read it, then the app is free to do so, and some of the API functions in the platform help to make such an app easier to write.
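For flavor, one side of such an encrypted sync channel could be set up with Python's standard `ssl` module; the host, port, and certificate file names below are hypothetical placeholders, not the platform's actual transport:

```python
import socket
import ssl

# Mutually authenticated TLS: the handshake negotiates the session key
# from the two participants' keys, and all sync traffic is encrypted.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_cert_chain(certfile="my_node.crt", keyfile="my_node.key")
ctx.load_verify_locations(cafile="peer_ca.pem")
with socket.create_connection(("peer.example.org", 8443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="peer.example.org") as tls:
        tls.sendall(b"...sync payload...")
```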

Implementation Approach

What are current use cases for the consensus mechanism?

In addition to traditional use cases (cryptocurrency, public ledger, smart contracts), the consensus mechanism also gives fairness in the transaction ordering.  This can enable use cases where the order must be fair, such as a stock market, or an auction, or a contest, or a patent office, or a massively multiplayer online (MMO) game.

Who are you currently working with (venture capitalists, banks, credit card companies, etc.)?

Ping Identity has announced a proof of concept product for Distributed Session Management built on the Swirlds platform. Swirlds, Inc. is currently funded by a mixture of venture capital, strategic partner, and angel funding.