What is Fluence?

Fluence is an efficient trustless computation platform that achieves request processing latencies of a few seconds and cost efficiency similar to traditional cloud computing. To run computations, Fluence uses a WebAssembly virtual machine, which makes it possible to deploy applications written in multiple programming languages into the decentralized environment.

What can be built with Fluence?

Fluence can be used as a general purpose backend engine for decentralized applications. Because of its cost efficiency, developers generally do not have to worry much about low-level code optimization. Existing software packages can be ported to Fluence as is, once they are compiled into WebAssembly.

Decentralized databases

As an example, we have ported an existing toy SQL database LlamaDB to run in the decentralized environment by making just a few modifications and compiling it into WebAssembly.

We expect it should not be extremely difficult to port an existing database such as SQLite or RocksDB to Fluence. Deployed, this database could serve frontends of user-facing decentralized applications.

This, coupled with decentralized storages such as IPFS or Swarm, could enable fully decentralized applications. Such applications would use a decentralized storage to store static data, and a decentralized database running on top of Fluence to serve dynamic client requests.

Gambling applications

Simple dice games or roulette can be ported to Fluence fairly easily. However, at the moment Fluence is not able by itself to run imperfect information games such as Texas Hold'em poker or Guess the Number. The reason is that the full game state can be read by the nodes running the backend, and Fluence does not have privacy-preserving computations built into its SDK.

We expect, however, that approaches such as decentralized card deck shuffling can be employed to port certain imperfect information games to the Fluence network. In the meantime, we have prepared a (deliberately broken) implementation of the Guess the Number game ;)


Perfect information games can be easily built on top of Fluence. Think of chess, Go, rock–paper–scissors, games similar to Dungeons and Dragons, or roll-and-move board games. We have prepared a tic-tac-toe example to play with. Collectible decentralized applications can be launched on Fluence as well.

How does Fluence work?

In order to reach low latency and high throughput, Fluence splits network nodes into two layers: the real-time processing layer and the batch validation layer.

The real-time processing layer is able to promptly serve client requests, but provides only moderate security guarantees that returned responses are correct. Later, the batch validation layer additionally verifies returned responses, and if it is found that some of the responses were incorrect, offending real-time nodes lose their deposits.

In this section we provide a brief overview of Fluence and discuss the reasons for having delayed verification. We also consider the basic Fluence incentive model.


The Fluence network consists of nodes performing computations in response to transactions sent by external clients. Algorithms specifying those computations are expressed in the WebAssembly bytecode; consequently, every node willing to participate in the network has to run a WebAssembly virtual machine.

Independent developers are expected to implement a backend package handling client transactions in a high-level language such as C/C++, Rust, or TypeScript, compile it into one or more WebAssembly modules, and then deploy those modules to the Fluence network. The network then takes care of spinning up the nodes that run the deployed backend package, interacting with clients, and making sure that client transactions are processed correctly.


Two major layers exist in the Fluence network: the real-time processing layer and the batch validation layer. The former is responsible for direct interaction with clients; the latter, for computation verification. In other words, real-time processing is the speed layer and batch validation is the security layer. The network also relies on Ethereum (as a secure metadata storage and dispute resolution layer) and Swarm (as a data availability layer).


Real-time processing layer

The real-time processing layer consists of multiple real-time clusters, which are stateful and keep locally the state required to serve client requests. Each cluster is formed by a few real-time worker nodes that are responsible for running particular backend packages and storing related state data. Workers in real-time clusters use Tendermint to reach BFT consensus and an interim metadata storage (built on top of a DHT such as Kademlia) to temporarily store consensus metadata before it is compacted and uploaded to the Ethereum blockchain.

To deploy a backend package to the Fluence network, the developer first has to allocate a cluster to run the package. Once the package is deployed, those functions that are exposed as external can be invoked by client transactions. If the package is no longer needed, the developer is able to terminate the cluster.

Developers possess significant control over real-time clusters: they are able to specify the desired cluster size and how much memory each node in the cluster should allocate to store the state. If one of the workers in the cluster is struggling, the developer who has allocated the cluster can replace this worker with a more performant one.

Real-time clusters are able to promptly respond to client requests, but those responses carry only moderate security guarantees when a significant fraction of network nodes are malicious. Because real-time clusters are formed by just a few worker nodes, they can tolerate only a few malicious nodes, which leaves a non-trivial chance that a real-time cluster might be completely dominated by attackers. Therefore, an additional level of verification is required for computations performed by real-time clusters.

Batch validation layer

To keep real-time clusters in check, the batch validation layer separately verifies all performed computations. This layer is composed of independent batch validators, which are stateless and have to download the required data before performing verification. In order to support this, every real-time cluster is required to upload the history of received transactions and performed state transitions to Swarm. Because Tendermint organizes transactions into blocks that each carry the hash of the state obtained after the previous block execution, real-time clusters upload transactions to Swarm in blocks as well.
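The chaining of blocks to state hashes can be sketched with a minimal simulation (it uses std's `DefaultHasher` and string "transactions" purely for illustration; the real network hashes WebAssembly VM state and uploads blocks to Swarm):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash stand-in for the state digest carried by each block.
fn h<T: Hash>(data: &T) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

// Each block records the hash of the state reached after the previous block.
struct Block {
    txs: Vec<String>,
    prev_state_hash: u64,
}

fn main() {
    let mut state: Vec<String> = vec![];
    let mut history: Vec<Block> = vec![];
    for txs in [vec!["tx1".to_string()], vec!["tx2".to_string(), "tx3".to_string()]] {
        history.push(Block { txs: txs.clone(), prev_state_hash: h(&state) });
        state.extend(txs); // "execute" the block against the local state
    }
    // A batch validator replaying block 0 must arrive at the state whose
    // hash is recorded in block 1; otherwise it raises a dispute.
    assert_eq!(history[1].prev_state_hash, h(&vec!["tx1".to_string()]));
    println!("blocks in history: {}", history.len());
}
```

Because each block pins the prior state, a validator that replays any history fragment can check its work against the recorded hashes.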

Later on, batch validators replay fragments of transaction history, which are composed of one or more blocks, and challenge state transitions that they have deemed incorrect through the dispute resolution layer. If one of the state transitions is not correct, it takes only a single honest validator to challenge this and penalize the real-time cluster that performed the transition.

Developers do not have any control over batch validators beyond deciding how much budget is carved out for batch validation – i.e., how many batch validations should happen for the fragment of transaction history once it is uploaded to Swarm. Furthermore, the batch validator that verifies any specific history fragment is chosen randomly out of all batch validators in the network in order to prevent possible cartels.

Batch validators compact the transaction history and reduce Swarm space usage by uploading intermediate state snapshots to Swarm. Once a transaction history fragment has been verified a sufficient number of times, it is dropped, leaving only the corresponding snapshot.

Dispute resolution layer

We have already mentioned that batch validators are able to dispute state transitions. This ability is not exclusive to batch validators: a real-time worker can submit a dispute if it disagrees with another real-time worker on how the state should be updated. However, such disputes normally arise only between workers that belong to the same cluster – other real-time workers simply do not carry the required state.

No matter which node has submitted the dispute, it is resolved with the aid of an external authority. The Fluence network uses a specially developed Ethereum smart contract named Arbiter as this authority. Because Ethereum is computationally bounded and thus unable to repeat the entire computation to verify state transitions, a verification game mechanism is used to find the first WebAssembly instruction that produced the diverging states. Only this instruction with the relevant portion of the state is then submitted to the Arbiter contract, which makes the final decision as to which node performed the incorrect state transition.
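The search for the first diverging instruction can be sketched as a binary search over step counts, assuming each party can be queried for a hash of its state after any prefix of the computation (the hashes below are simulated plain numbers, not real WebAssembly VM state):

```rust
// Illustrative sketch of the verification game: binary-search over step
// counts to locate the first step where two executions' states diverge.
// `state_hash(steps, honest)` returns the state hash a party claims after
// executing `steps` instructions; `honest` selects which party is queried.
fn first_divergence(total_steps: u64, state_hash: impl Fn(u64, bool) -> u64) -> u64 {
    // Invariant: parties agree after `lo` steps and disagree after `hi` steps.
    let (mut lo, mut hi) = (0u64, total_steps);
    while hi - lo > 1 {
        let mid = (lo + hi) / 2;
        if state_hash(mid, true) == state_hash(mid, false) {
            lo = mid; // still in agreement: the divergence happens later
        } else {
            hi = mid; // already diverged: the divergence is at or before mid
        }
    }
    hi // only this single instruction must be re-executed by the authority
}

fn main() {
    // Simulated pair of executions that diverge at step 1337.
    let hash = |steps: u64, honest: bool| if honest || steps < 1337 { steps } else { steps + 1 };
    println!("first diverging step: {}", first_divergence(100_000, hash)); // 1337
}
```

The logarithmic number of comparisons is what makes on-chain resolution affordable: the contract never replays the whole computation, only one instruction.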

Every node in the network is required to put down a significant security deposit before performing computations. If it is found that a node has behaved incorrectly, its deposit is slashed. Assuming that potential adversaries are financially restricted, this reduces the number of cases where a client might receive an incorrect response.

Data availability layer

The Swarm receipts mechanism is used to make sure that the fragments of transaction history uploaded by the real-time clusters do not disappear before the batch validators replay and verify them. The Swarm receipt is a confirmation from the Swarm node that it is responsible for the specific uploaded data. If the Swarm node is not able to return the data when requested, its deposit is slashed, which prevents Swarm from losing potentially incriminating data.

Secure metadata storage

Deposits placed by Fluence network nodes, Swarm receipts issued for transaction history fragments, and metadata entries related to the batch validation and real-time cluster compositions are stored in the Ethereum blockchain. For the sake of simplicity in this paper, we will assume that the Arbiter contract holds this data in addition to its dispute resolution responsibilities.


To understand the reason behind having two noticeably different layers, we need to recall their properties. Real-time workers are stateful, which considerably improves response latencies because they do not have to download the required state data to perform computations. As an example, assume that we are building a decentralized SQL database that should support an indexed access to data. Complex queries such as the one listed below often require traversal of multiple indices, which are often implemented as B-trees.

    SELECT DATE(ts) AS date,
           AVG(gas_price * gas_used) AS tx_cost
    FROM transactions tx
    WHERE tx.to IN (SELECT address
                    FROM contracts
                    WHERE name = 'CryptoDolphins')
    GROUP BY DATE(ts)

Example query to blockchain data.

To traverse a B-tree, we need to sequentially fetch its nodes that satisfy the query conditions. It is not possible to retrieve the required B-tree nodes all at once because the next node to fetch can be determined only by matching the parent B-tree node against the query. If an index is stored externally in a decentralized storage such as IPFS or Swarm, this means that multiple network roundtrips must be performed between the machine performing the computations and the data storage, which significantly increases latency.
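The roundtrip blow-up can be illustrated with a toy lookup where every node fetch stands in for a network request (the types and storage layout here are invented for illustration; this is not the real Swarm/IPFS API):

```rust
use std::collections::HashMap;

// Toy model of an index stored in a remote content-addressed storage.
// Every `storage` access below would be a network roundtrip in practice.
#[derive(Clone)]
struct BTreeNode {
    keys: Vec<u64>,
    children: Vec<String>, // content hashes of child nodes; empty for leaves
}

// Returns whether the key was found and how many sequential fetches it took.
fn lookup(storage: &HashMap<String, BTreeNode>, root: &str, key: u64) -> (bool, u32) {
    let mut roundtrips = 1; // fetching the root
    let mut node = storage[root].clone();
    while !node.children.is_empty() {
        // The next hash to fetch is known only after the parent has arrived,
        // so the fetches cannot be batched or parallelized.
        let idx = node.keys.iter().filter(|&&k| key >= k).count();
        node = storage[&node.children[idx]].clone();
        roundtrips += 1;
    }
    (node.keys.contains(&key), roundtrips)
}

fn main() {
    let mut storage = HashMap::new();
    storage.insert("leaf-lo".into(), BTreeNode { keys: vec![1, 2], children: vec![] });
    storage.insert("leaf-hi".into(), BTreeNode { keys: vec![10, 20], children: vec![] });
    storage.insert("root".into(), BTreeNode {
        keys: vec![10],
        children: vec!["leaf-lo".into(), "leaf-hi".into()],
    });
    let (found, roundtrips) = lookup(&storage, "root", 20);
    println!("found={} roundtrips={}", found, roundtrips); // found=true roundtrips=2
}
```

With the index held locally by a stateful worker, the same traversal costs memory or disk reads instead of network roundtrips.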

Other algorithms, especially those requiring irregular access to data, could benefit from storing their data locally as well. However, data locality significantly increases the economic barrier to joining a real-time cluster because a worker willing to participate in the cluster has to download the current state first. Consequently, this motivates workers to remain in the cluster and thus cluster compositions do not change much over time.

This means that malicious real-time workers in the cluster might form a cartel to produce incorrect results. Without batch validation, every node in the real-time cluster knows it will be verified only by its peers, which are known in advance because clusters are tightly connected. Consequently, malicious nodes can, for example, exploit the following strategy: use a special handshake to recognize other malicious nodes in the cluster and start producing incorrect results if they account for at least \(\frac{2}{3}\) of the total number of nodes (so they can reach BFT consensus without talking to the rest of the cluster); otherwise, work honestly. This strategy is virtually impossible to catch without external verification.

Furthermore, because real-time clusters are supposed to be small enough to be cost-efficient, the probability that malicious nodes will take over a cluster is significant. For example, in a network where 10% of all nodes are malicious, a real-time cluster that consists of 7 workers independently sampled from the network has approximately a \(1.8 \cdot 10^{-4}\) chance of having at least \(\frac{2}{3}\) malicious nodes.

To counteract this, the batch validation layer provides external verification. Nodes performing batch validation are chosen randomly, which means real-time nodes do not know beforehand which validator will be verifying them and thus cannot collude with the validator in advance.

Batch validation also decreases the probability that an intentional mistake made by a real-time cluster will never get noticed. Assume that in the same network where 10% of all nodes are malicious, we spun a real-time cluster of 4 workers and allocated a budget for 3 batch validations.

In this setup, a mistake can go unnoticed only if malicious actors comprise at least \(\frac{2}{3}\) of the real-time workers and all of the batch validators that verified the transaction history. The chance of this happening is \(\approx 3.7 \cdot 10^{-6}\), which is two orders of magnitude less than in the case where the entire budget was spent on the real-time cluster only.
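These figures follow from the binomial distribution; a quick way to reproduce them, assuming each node is malicious independently with probability 0.1:

```rust
// C(n, k): number of ways to choose k nodes out of n.
fn binomial(n: u64, k: u64) -> u64 {
    (1..=k).fold(1, |acc, i| acc * (n - i + 1) / i)
}

// Probability that at least k of n independently sampled nodes are malicious.
fn p_at_least(n: u64, k: u64, p: f64) -> f64 {
    (k..=n)
        .map(|i| binomial(n, i) as f64 * p.powi(i as i32) * (1.0 - p).powi((n - i) as i32))
        .sum()
}

fn main() {
    // 7-worker cluster: BFT consensus is overridden by >= 5 malicious workers (2/3 of 7).
    println!("{:.1e}", p_at_least(7, 5, 0.1)); // ~1.8e-4
    // 4-worker cluster (>= 3 malicious) AND all 3 randomly chosen validators malicious.
    println!("{:.1e}", p_at_least(4, 3, 0.1) * 0.1f64.powi(3)); // ~3.7e-6
}
```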

We also expect that in the presence of batch validators, the fraction of malicious nodes in the network will drop significantly below 10%, because every time a malicious action is caught, the node that performed it loses its deposit and thus leaves the network.

Incentive model

Fluence uses a concept similar to Ethereum gas to track computational efforts. With few exceptions, every WebAssembly instruction has a predefined associated cost, which is named fuel to avoid confusion with Ethereum gas. The fuel required to perform a computation is roughly proportional to the total sum of fuel amounts assigned to instructions in the computation execution trace.
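A fuel meter over an execution trace can be sketched as follows (the per-instruction costs are invented for illustration; this text does not specify Fluence's actual cost table):

```rust
use std::collections::HashMap;

// Minimal sketch of fuel metering: sum the predefined cost of every
// instruction in an execution trace.
fn fuel_for_trace(costs: &HashMap<&str, u64>, trace: &[&str]) -> u64 {
    trace.iter().map(|op| costs[op]).sum()
}

fn main() {
    // Invented cost table, roughly mimicking "memory access costs more".
    let costs: HashMap<&str, u64> =
        [("i32.add", 1), ("i32.mul", 3), ("i32.load", 5)].into_iter().collect();
    // A tiny execution trace of some WebAssembly function.
    let trace = ["i32.load", "i32.load", "i32.mul", "i32.add"];
    println!("fuel: {}", fuel_for_trace(&costs, &trace)); // fuel: 14
}
```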

When it comes to storage usage accounting, Fluence rules differ significantly from Ethereum. Ethereum instructions for interacting with persistent storage require the client to pay a one-time fee – which is significant – to compensate for a certain amount of future expenses that will be incurred in storing the information. For example, SSTORE (the instruction that is used to save a word to the persistent storage) costs 20,000 gas, which is a few orders of magnitude more than the cost of basic instructions such as POP, ADD, or MUL, which require 2–5 gas.

While this approach has been working fairly well for Ethereum, we think that adapting it to the Fluence network – the aim of which is achieving cost-efficiency comparable to traditional clouds – is problematic. In conventional backend software, it is common to update an on-disk or in-memory state without worrying that the performed update costs considerably more than other operations. In order to easily port existing software to Fluence, we need to provide developers a method that does not require them to fundamentally modify the code being ported.

Another reason to reconsider storage accounting is that execution of WebAssembly instructions and provision of data storage are quite different. If we say that network nodes are compensated for the performed work, then the total difficulty of processed instructions indeed defines an amount of the work performed. However, allocating a megabyte of storage is not work – it is power. Only after a node has kept a megabyte of data in storage for a certain time can we estimate how much work it has performed: work = power · time.

When a client pays a one-time upfront fee to upload their data, there is no way for the network node responsible for its storage to know how long the data will be stored. No matter how large the upfront fee is, it is possible that expenses required to store the data will exceed this fee, leaving the financial burden on the node. This means that a different storage accounting approach must be developed for the Fluence network, which we propose and discuss below.

Rewards accounting. To counteract the aforementioned issues, various storage rent fees were proposed for Ethereum, including requiring clients to pay a fee to renew their storage every time they issue a transaction. However, to bring the developer experience as close as possible to traditional backend software, in the Fluence network the developer is the only party ultimately responsible for compensating network nodes.

Fluence nodes are compensated for the computational difficulty of executed WebAssembly instructions and for the storage space allocated for a specific period of time. Because different hardware might need different time to execute the same program, computational difficulty is used as a substitution for time. In other words, once a block of client transactions is processed and the fuel \(\varphi\) required to process it is counted, this fuel is transformed into the standard time \(t_{std}\) by multiplying it by the network-wide scaling constant \(c_{time/fuel}\):

\(t_{std} = c_{time/fuel} \cdot \varphi\)

To estimate the total node reward \(\upsilon\), two more scaling constants are introduced: \(c_{\upsilon/fuel}\) converts spent units of fuel into the network currency; \(c_{\upsilon/spacetime}\) does the same with a unit of storage space allocated for a unit of time. Assuming that the size of the allocated storage space is denoted by \(\omega\), the total node reward is computed as:

\(\upsilon = c_{\upsilon/fuel} \cdot \varphi + c_{\upsilon/spacetime} \cdot \omega \cdot t_{std}\)
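A worked example of the two formulas above (all constant values below are invented for illustration; the text gives no concrete numbers for the network-wide constants):

```rust
// Computes the node reward v = c_v/fuel * phi + c_v/spacetime * omega * t_std,
// where t_std = c_time/fuel * phi.
fn node_reward(
    fuel: f64,              // phi: fuel spent processing a block
    space: f64,             // omega: allocated storage space
    c_time_per_fuel: f64,   // network-wide c_time/fuel
    c_v_per_fuel: f64,      // network-wide c_v/fuel
    c_v_per_spacetime: f64, // network-wide c_v/spacetime
) -> f64 {
    let t_std = c_time_per_fuel * fuel; // standard time derived from fuel
    // Storage reward follows work = power * time: space held for t_std.
    c_v_per_fuel * fuel + c_v_per_spacetime * space * t_std
}

fn main() {
    // 1000 units of fuel, 64 units of storage, made-up scaling constants.
    println!("reward: {}", node_reward(1000.0, 64.0, 0.002, 0.01, 0.0005));
}
```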

It should be noted that contrary to the system used by Ethereum, where a client is able to choose a different gas price for every transaction, the scaling constants \(c_{time/fuel}\), \(c_{\upsilon/fuel}\), and \(c_{\upsilon/spacetime}\) are fixed for the entire Fluence network. One reason for this design is that batch validators are selected randomly and are not able to choose the computations they are going to verify.

By allowing clients or developers to choose their compensation level, batch validators might be forced to perform complex computations for an unreasonably low reward. To prevent this, scaling constants are periodically updated, similar to how mining difficulty changes in Ethereum. If there is not enough supply, the network-wide compensation level increases; conversely, if there is not enough demand, the compensation level drops.

Dummy transactions. Because time is counted only for performed computations, simply storing the state without processing transaction blocks does not bring real-time workers any compensation. Therefore, when incoming client transactions are rare, the block creation rate will be low, and low-demand backends will spend most of their time merely storing the state between blocks. This time is never compensated; workers running such low-demand backends might spend far more resources on storing the state than they would receive in total compensation.

To offset this, real-time workers are allowed to send dummy transactions to themselves; the fuel required to process such dummy transactions is accounted for in the same way as the fuel required to process client transactions. This way, even if the client transaction volume is low, real-time workers will be compensated proportionally to the (real-world) time they have been running a certain backend deployment.

Batch validators, however, are not affected by this issue because they do not have to wait for incoming transactions and new blocks. A batch validator replays a fragment of transaction history at the maximum rate it is able to perform; once it completes the processing of the fragment, it moves to the next fragment. Additionally, for the same amount of work (which is defined by used fuel), real-time workers and batch validators are compensated evenly, which makes both options equally attractive to miners. Therefore, no special mechanism to recompense batch validators exists in the Fluence network.

Because different hardware can process transactions at different rates, it might happen that a very fast real-time cluster will be able to produce and process dummy transactions so fast that the compensation from the developer will become unexpectedly high. To mitigate this, in addition to being able to set the size of the storage space \(\omega\), developers have the ability to set the maximum fuel amount \(\varphi_{max}\) that real-time workers are allowed to spend per unit of time.

This allows a developer to budget how much will be spent on computations performed by the Fluence network in the next day, week, or month. Additionally, it lets real-time workers plan how much capacity they should allocate for transaction processing performed by the backend deployed by that developer.
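For example, the worst-case compute spend over a period is simply \(c_{\upsilon/fuel} \cdot \varphi_{max}\) times the period length; a sketch with invented constants (the storage term of the reward is deliberately ignored here):

```rust
// Upper bound on the compute part of the developer's spend: real-time workers
// cannot burn more than `fuel_cap_per_sec` (phi_max) fuel per second.
fn max_compute_spend(fuel_cap_per_sec: f64, seconds: f64, c_v_per_fuel: f64) -> f64 {
    // The storage reward term adds on top of this bound.
    c_v_per_fuel * fuel_cap_per_sec * seconds
}

fn main() {
    let week = 7.0 * 24.0 * 3600.0; // seconds in a week
    // Made-up cap of 500 fuel/s and made-up c_v/fuel of 0.01 currency units.
    println!("max weekly compute spend: {}", max_compute_spend(500.0, week, 0.01));
}
```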

We should note that it is possible for ill-disposed real-time workers to process only self-created dummy transactions – and none sent by clients. While we do not discuss a detailed mechanism to combat such behavior in this paper, we should note that a developer monitoring the use of the deployed backend will be able to notice a drop in the ratio between client and dummy transactions. In this case, the developer can either replace misbehaving real-time workers with other network nodes, or reduce the amount of fuel \(\varphi_{max}\) that the real-time cluster is allowed to spend per unit of time.

Client billing. As we have previously mentioned, developers are exclusively responsible for paying out rewards to the Fluence network nodes. Because miners’ compensation is proportional to used fuel and allocated storage space, developers are directly incentivized to write efficient backends.

Generally, clients are not responsible for making any payments to network nodes. Nevertheless, different external monetization schemes are possible for reimbursing developers. The Fluence network does not prescribe an exact monetization scheme; however, it might provide some of the most common schemes through extension packages.

For example, one developer might allow only those clients from a whitelist to interact with the deployed backend, charging a flat rate to add a client to that whitelist; another developer might charge clients a fixed fee per each submitted transaction. It might also be possible for clients to pay no explicit fee while the developer uses their personal funds to cover the miners’ expenses.


This document will guide you through the three major steps of development with Fluence.

You will develop a two-tiered web application with the Rust backend and the JavaScript frontend. While the architecture of this application resembles a conventional centralized one, the backend will get magically decentralized and will run on top of the Fluence network.

First, you will use the Fluence Rust SDK to develop the backend and compile it to a WebAssembly package. After that, to deploy the obtained backend package to the Fluence network, you will upload it to Swarm and publish the Swarm reference to the Fluence smart contract. Finally, you will build the frontend which will interact with the backend running in the Fluence network.

Should you have any questions, feel free to join our Discord!

Developing the backend app

Fluence runs WebAssembly programs, so it is possible to build a Fluence backend in any language that targets WebAssembly. In this guide we will use Rust as the language of choice.

First you will build a hello world Rust app, adapt it to Fluence, and then compile it to WebAssembly.

Setting up Rust

Let's get some Rust!

Install the Rust compiler:

# installs the Rust compiler and supplementary tools to `~/.cargo/bin`
~ $ curl https://sh.rustup.rs -sSf | sh -s -- -y
info: downloading installer
Rust is installed now. Great!
To configure your current shell run source $HOME/.cargo/env

Let's listen to the installer and configure your current shell:
(new shell environments should pick up the right configuration automatically)

~ $ source $HOME/.cargo/env
<no output>

After that, we need to install the nightly Rust toolchain:
(Fluence Rust SDK requires the nightly toolchain due to certain memory operations)

~ $ rustup toolchain install nightly
info: syncing channel updates ...
  nightly-<arch> installed - rustc 1.34.0-nightly (57d7cfc3c 2019-02-11)

Let's check that the nightly toolchain was installed successfully:

~ $ rustup toolchain list | grep nightly
# the output should contain the nightly toolchain

To compile Rust to WebAssembly, we also need to add the wasm32 compilation target:

# install target for WebAssembly
~ $ rustup target add wasm32-unknown-unknown --toolchain nightly
info: downloading component 'rust-std' for 'wasm32-unknown-unknown'
info: installing component 'rust-std' for 'wasm32-unknown-unknown'

Finally, let's check that everything was set up correctly and compile a sample Rust code:

# create a simple program that always returns 1
~ $ echo "fn main(){1;}" > test.rs

# compile it to WebAssembly using rustc from the nightly toolchain
~ $ rustup run nightly rustc --target=wasm32-unknown-unknown test.rs
<no output>

# check that the test.wasm output file was created
~ $ ls -lh test.wasm
-rwxr-xr-x  1 user  user   1.4M Feb 11 11:59 test.wasm

If everything looks similar, then it's time to create a Rust hello-world project!

Creating an empty Rust package

Let's create a new empty Rust package:

# create empty Rust package
~ $ cargo +nightly new hello-world --edition 2018
Created binary (application) `hello-world` package

# go to the package directory
~ $ cd hello-world
~/hello-world $

More info on creating a new Rust project can be found in the Rust Cargo book.

[Optional] Creating a Hello World Rust application

If you are already familiar with Rust, feel free to skip this section.

Let's write some code: our backend should receive a user name from the program input, and then print a greeting.

Take a look at src/main.rs:

~/hello-world $ cat src/main.rs

You will see the following code, which should be there by default and almost does what we need:

fn main() {
    println!("Hello, world!");
}

Open src/main.rs in any editor, delete all existing code, and paste the following:

use std::env;

fn greeting(name: String) -> String {
    format!("Hello, world! -- {}", name)
}

fn main() {
    let name = env::args().nth(1).unwrap();
    println!("{}", greeting(name));
}

This code:

  1. defines the greeting function which takes a name and returns a greeting message
  2. defines the main function which reads the first argument, passes it to the greeting function, and prints the returned result

Let's now compile and run our example:

~/hello-world $ cargo +nightly run MyName
   Compiling hello-world v0.1.0 (/root/hello-world)
    Finished dev [unoptimized + debuginfo] target(s) in 0.70s
     Running `target/debug/hello-world MyName`
Hello, world! -- MyName

WARNING! If you see the following error, you should install gcc and try cargo +nightly run again:

Compiling hello-world v0.1.0 (/root/hello-world)
error: linker cc not found
  = note: No such file or directory (os error 2)

error: aborting due to previous error
error: Could not compile hello-world.

Now that we have a working Hello World application, it's time to adapt it for Fluence.

Creating a Hello World backend for Fluence

For a backend to be compatible with the Fluence network, it should follow a few conventions so that Fluence nodes can run your code correctly. To reduce the amount of boilerplate code, we have developed the Rust SDK. Let's see how to use it.

Adding Fluence as a dependency

First you need to add the Fluence Rust SDK as a dependency.
Let's take a look at Cargo.toml:

~/hello-world $ cat Cargo.toml

It should look like this:

[package]
name = "hello-world"
version = "0.1.0"
authors = ["root"]
edition = "2018"

[dependencies]


Now, open Cargo.toml in the editor, and add fluence to dependencies:

[package]
name = "hello-world"
version = "0.1.0"
authors = ["root"]
edition = "2018"

[dependencies]
fluence = { version = "0.0.11" }

Implementing the backend logic

Create and open ~/hello-world/src/lib.rs in the editor and paste the following code there:

use fluence::sdk::*;

#[invocation_handler]
fn greeting(name: String) -> String {
    format!("Hello, world! From user {}", name)
}
This code imports the Fluence SDK, and marks the greeting function with the #[invocation_handler] macro.

The function marked with the #[invocation_handler] macro is called a gateway function. It is essentially the entry point to your application: all client transactions will be passed to this function, and once it returns a result, clients can read this result.

Gateway functions are allowed to take and return only String or Vec<u8> values – check out the SDK overview for more information.

Making it a library

For the gateway function to be correctly exported and thus available for Fluence, the backend should be compiled to WebAssembly as a library.

To make the backend a library, open Cargo.toml in the editor, and add the [lib] section:

[package]
name = "hello-world"
version = "0.1.0"
authors = ["root"]
edition = "2018"

[lib]
name = "hello_world"
path = "src/lib.rs"
crate-type = ["cdylib"]

[dependencies]
fluence = { version = "0.0.11" }

Compiling to WebAssembly

To build the .wasm file, run this from the application directory:
(note: downloading and compiling dependencies might take a few minutes)

~/hello-world $ cargo +nightly build --lib --target wasm32-unknown-unknown --release
    Updating index
    Finished release [optimized] target(s) in 1m 16s

If everything goes well, you should get the .wasm file deep in the target directory.
Let's check it:

~/hello-world $ ls -lh target/wasm32-unknown-unknown/release/hello_world.wasm
-rwxr-xr-x  2 user  user  1.4M Feb 11 11:59 target/wasm32-unknown-unknown/release/hello_world.wasm

Publishing the backend app

In the Fluence network, applications are deployed by uploading WebAssembly code to Swarm, and publishing hashes of the uploaded code to the Fluence smart contract.

It is also possible to specify the desired cluster size, which sets the required number of real-time workers in the cluster hosting the application. Note that the application might wait in the queue until there are enough free workers to form a cluster of the desired size.

Connecting to Swarm and Ethereum Rinkeby nodes

To make sure we're on the same page:

  • Swarm is a decentralized file storage
  • Ethereum Rinkeby is one of the Ethereum testnets and works with toy money
  • the Fluence smart contract is what governs the Fluence network

To upload the application code to Swarm, you need to have access to one of the Swarm nodes. The same goes for Ethereum: you need access to an Ethereum node running the Rinkeby testnet.

For your convenience and to make this guide simpler, we use Ethereum and Swarm nodes set up by Fluence Labs, but you can use any other nodes if you wish.

WARNING! This is not a secure way to connect to Ethereum or Swarm.
It should not be used in production or in a security-sensitive context.

Registering an Ethereum Rinkeby account


Go to, select Rinkeby in the upper right dropdown, enter any password, and download the Keystore file. You will find your account address in the last part of the Keystore file name.


Top up your account with funds

There are two main Rinkeby faucets.

This one gives you up to 18 Ether, but it requires you to post an Ethereum address to a social network.

Another one gives you ETH right away, but just 0.001 Ether, which isn't enough for publishing, so you may need to top up several times.

Installing the Fluence CLI

It is hard to send publication transactions manually, so we provide the Fluence CLI.
You can download the CLI from the releases page, or fetch it in the terminal:


curl -L -o fluence

Don't forget to add permissions to run it:

chmod +x ./fluence

# check that the CLI is working
./fluence --version
Fluence CLI 0.1.5

Publishing the application with the Fluence CLI

As we have already mentioned, you need to have access to the Ethereum Rinkeby and Swarm networks. You can either use Ethereum and Swarm nodes set up by Fluence Labs, or specify other nodes by providing their URIs using --eth_url and --swarm_url options.

You also need a Rinkeby account with some funds (you can get Ether from a faucet) and its private key, which can be either a hex string or a Keystore file.

To interact with the Fluence CLI, we will set it up first:

./fluence setup

This command will ask you to enter the Fluence contract address, Swarm and Ethereum node addresses, and, finally, your account credentials. It will create a config file that the CLI will use for subsequent commands.

By default, Swarm and Ethereum nodes controlled by Fluence Labs will be used. Note that you need to provide either the secret key or the Keystore file path + password to be able to send transactions to Ethereum.

Having all that, now you are ready to publish your application:

./fluence publish \
            --code_path        ~/hello-world/target/wasm32-unknown-unknown/release/hello_world.wasm \
            --gas_price        10 \
            --cluster_size     4 \
            --wait_syncing
Once the command completes, you should see an output similar to the following:

[1/3]   Application code uploaded. ---> [00:00:00]
swarm hash: 0xf5c604478031e9a658551220da3af1f086965b257e7375bbb005e0458c805874
[2/3]   Transaction publishing app was sent. ---> [00:00:03]
  tx hash: 0x5552ee8f136bce0b020950676d84af00e4016490b8ee8b1c51780546ad6016b7
[3/3]   Transaction was included. ---> [00:02:38]
App deployed.
  app id: 2
  tx hash: 0x5552ee8f136bce0b020950676d84af00e4016490b8ee8b1c51780546ad6016b7

Verifying the application status

To check the state of your application – for example, to see which nodes it was deployed to – run:

./fluence status \
            --app_id           <your app id here>

The output will be in JSON, and should look similar to the following:

{
  "apps": [
    {
      "app_id": "<your app id here>",
      "storage_hash": "<swarm hash>",
      "storage_receipt": "0x0000000000000000000000000000000000000000000000000000000000000000",
      "cluster_size": 4,
      "owner": "<your ethereum address>",
      "pin_to_nodes": [],
      "cluster": {
        "genesis_time": 1549353504,
        "node_ids": [
          "<node ids here>"
        ]
      }
    }
  ],
  "nodes": [
    {
      "validator_key": "0x5ed7a87da4bd800cd4f5b440f36ccece9c9e4542f9808ea6bfa45f84b8198185",
      "tendermint_p2p_id": "0x6c03a3fe792314f100ac8088a161f70bd7d257b1",
      "ip_addr": "",
      "api_port": 25000,
      "capacity": 10,
      "owner": "0x5902720e872fb2b0cd4402c69d6d43c86e973db7",
      "is_private": false,
      "app_ids": [
        "<your app id here>"
      ]
    },
    "<3 more nodes here>"
  ]
}

The backend application should be successfully deployed now!

Developing the web app

For this part, you need npm installed. Please refer to the npm docs for installation instructions.

Preparing web application

Let's clone a simple web app template:

~ $ git clone
~ $ cd frontend-template
~/frontend-template $ 

Inside you should find:

  • package.json which adds required dependencies
  • webpack.config.js which is needed for webpack to work
  • index.js which demonstrates how to interact with the real-time cluster

The template web application uses the Fluence frontend SDK. This SDK locates the real-time cluster with the help of the Fluence smart contract, and then sends transactions to that cluster.

Let's take a look at index.js:

// the address of the Fluence smart contract on Ethereum
let contractAddress = "0x074a79f29c613f4f7035cec582d0f7e4d3cda2e7";

// the address of the Ethereum node
// MetaMask is used to send transactions if this address is set to `undefined`
let ethUrl = "";

// the backend appId as seen in the Fluence smart contract
let appId = "6";

// create a session between the frontend client and the backend application
// the session is used to send transactions to the real-time cluster
fluence.connect(contractAddress, appId, ethUrl).then((s) => {
  console.log("Session created");
  window.session = s;
  helloBtn.disabled = false;
});

// set a callback on the button click
helloBtn.addEventListener("click", send);

// send a transaction with the name to the real-time cluster and display the response
function send() {
  const username = usernameInput.value.trim();
  let result = session.invoke(username);
  getResultString(result).then(function (str) {
    greetingLbl.innerHTML = str;
  });
}

Running the web application

Make sure that you have changed the appId variable to the identifier of the deployed backend!

To install dependencies, and compile and run your web application, run:

~/frontend-template $ npm install
~/frontend-template $ npm run start
> [email protected] start /private/tmp/frontend-template
> webpack-dev-server

ℹ 「wds」: Project is running at http://localhost:8080/

Now you can open http://localhost:8080/ in your browser. You should see an input text element and a disabled button, which should become enabled once the session with the backend is initialized.

You can also open the developer console, and check out the Fluence SDK logs:

Connecting web3 to
Session created

You can also interact with the backend application from the developer console:

let result = session.invoke("MyName");
getResultString(result).then((str) => console.log(str));
Hello, world! -- MyName


Thanks for finishing the quickstart guide!

A short recap of what you have developed and learned:

  • How to set up Rust and Node.js environments from scratch.
  • How to interact with Ethereum and Swarm through the Fluence CLI.
  • How to spin up a decentralized backend running in the Fluence network, and build frontend applications interacting with this backend.

Hope this was fun!

Join our Discord if you have any feedback, questions, or ideas what could be built with Fluence :)

Backend guide

Backend applications deployed to Fluence nodes are usually composed of two logical parts. The first part is the domain logic code produced by a developer. The second part is the Fluence SDK, which is responsible for accepting transactions, finding the right domain function to call, invoking this function and returning results back to the state machine.

It is not necessary for a developer to use the Fluence SDK – it merely exists to make development more convenient. However, there are certain rules that code running within the WebAssembly VM is expected to follow, so if you want to build applications without the SDK or implement your own SDK, please consult the following document.

If you would like to learn how to build backend applications using the Fluence SDK, please visit the SDK overview.

Backend SDK overview

The Fluence backend SDK consists of two crates: main and macro. The main crate is used for memory-related operations and logging, while the macro crate contains the macro that simplifies entry point functions. These crates can be used separately, but the preferred way is to use the global fluence crate, which combines both.

In Rust 2018 this can be done by adding Fluence SDK as a dependency, and then adding use fluence::sdk::* to Rust sources.
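For reference, the dependency declaration might look roughly like this in Cargo.toml (the version number below is illustrative; check the latest release):

```toml
[dependencies]
fluence = "0.1"
```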

Rust 2015

To use Fluence SDK with Rust 2015 import it like this:

extern crate fluence;

use fluence::sdk::*;

Example Rust 2015 application can be found here.

Entry point function

Each WebAssembly backend application deployed to the Fluence network is expected to provide a single entry point function named invoke. The easiest way to implement this function is to use the invocation_handler macro provided by the Fluence SDK:

use fluence::sdk::*;

#[invocation_handler]
fn greeting(name: String) -> String {
    format!("Hello, world! -- {}", name)
}
If anything goes wrong, cargo expand can be used for troubleshooting and macro debugging. Keep in mind that the function the macro is attached to should:

  • not have more than one input argument and always return a value
  • not be unsafe, const, generic, or have custom abi linkage or variadic params
  • have its input argument (if present) and return value typed as either String or Vec<u8>
  • not use the invoke name, which is reserved by the Fluence SDK
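To make the dispatch concrete, here is a plain-Rust sketch of what a handler and its generated invoke wrapper conceptually look like. This is a simplification: the real generated code also manages passing strings through WebAssembly memory, which is omitted here.

```rust
// The domain function stays ordinary Rust...
fn greeting(name: String) -> String {
    format!("Hello, world! -- {}", name)
}

// ...while the macro generates an entry point named `invoke` that
// forwards the request to it (memory handling omitted in this sketch).
fn invoke(input: String) -> String {
    greeting(input)
}

fn main() {
    assert_eq!(invoke("MyName".to_string()), "Hello, world! -- MyName");
}
```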

The invocation_handler macro also allows specifying an initialization function:

use fluence::sdk::*;

fn init() {
    // will be called just before the first `greeting()` function invocation
}

#[invocation_handler(init_fn = init)]
fn greeting(name: String) -> String {
    format!("Hello, world! -- {}", name)
}
The initialization function is called only once, before any other code, when the first transaction arrives. This makes it a good place for code that prepares the backend application.
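The once-only ordering described above can be illustrated in plain Rust. The Fluence SDK arranges this for you via init_fn; std::sync::Once is used here purely as a sketch of the semantics, not as something you need to write yourself.

```rust
use std::sync::Once;

static INIT: Once = Once::new();

fn init() {
    // runs exactly once, before the first `greeting()` call completes
    println!("initializing");
}

fn greeting(name: String) -> String {
    INIT.call_once(init);
    format!("Hello, world! -- {}", name)
}

fn main() {
    println!("{}", greeting("Alice".to_string()));
    // `init` is not called again on subsequent invocations
    println!("{}", greeting("Bob".to_string()));
}
```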