Spaces

XOOM Spaces uses our cluster-based Grid as both a distributed object store and a means of object exchange within a single scaled microservice (i.e. a Bounded Context).

XOOM Spaces facilitates local and distributed object storage and exchange. A distributed space can span a number of clustered XOOM Grid nodes and is transactionally replicated for fail-safe access. A distributed space keeps frequently used data near the actor behaviors and processes that need it.

Such technologies are often referred to as data grids and data fabrics. Even so, the distributed cached data is only half of the story. The point is to use the distributed object storage for the business-driven operational purposes of the application.

Using Spaces

To start a local or distributed Space, first obtain an Accessor instance:

Accessor myAccessor = Accessor.using(grid, "myAccessor");

An Accessor instance is used to create either a local or a distributed space. The overloaded spaceFor(...) methods create a local Space instance, whereas the overloaded distributedSpaceFor(...) methods create a distributed Space instance. Note that local and distributed spaces are both represented by the Space protocol, which in both cases is backed by an actor. When the Space is local only, the actor resides on the one node alone. A distributed Space, on the other hand, has a backing actor on a number of different clustered Grid nodes.
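For example, the following minimal sketch creates one of each. The space names are illustrative, and the exact spaceFor(...) overloads available (some also accept scan-interval and partitioning settings) depend on the release you are using:

// A local Space, backed by an actor that resides only on this node.
Space localSpace = myAccessor.spaceFor("my-local-space");

// A distributed Space, backed by actors on a number of clustered Grid nodes.
Space distributedSpace = myAccessor.distributedSpaceFor("my-distributed-space");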

The current implementation of distributed spaces replicates all items of every space to all grid nodes. There is a replication transaction around all put() and take() operations. To avoid excessive overhead on larger clusters, transactions can be configured to require only a fraction of the total node replication confirmations before they are considered fully committed. For example, you might use a replication factor of 0.25, which means that 25% of the total nodes must confirm the replication of a put() before the transaction is considered committed; on an eight-node grid, that is two confirming nodes. Yet, after the limited replication is considered committed, the remainder of the replications continue in the background until confirmed by all nodes.

Future releases will feature grid node replication to only a subset of all nodes. For example, replication to three out of nine cluster nodes could be considered full replication. The reasoning is that it is highly unlikely that all three of those nodes would be lost from the same cluster (nearly) simultaneously.

Space type

The Space protocol is the central component of the Spaces API:

public interface Space {
  <T> Completes<KeyItem<T>> put(final Key key, final Item<T> item);
  <T> Completes<Optional<KeyItem<T>>> get(final Key key, final Period until);
  <T> Completes<Optional<KeyItem<T>>> take(final Key key, final Period until);
}

It contains the Space messages put(), get(), and take(). The get() and take() methods accept a Period parameter, which indicates how long the result will be awaited. Also, any new Item added to a Space has a Lease property. The Lease defines the length of time the Item will be stored, which may be indefinite. When a lease expires, the Item is evicted from the Space. The Period and Lease values work together.

The following describes each message of the full Space protocol.

put(Key key, Item<T> item)

Puts a new Item<T> that is referenced by the Key into the Space, or replaces an existing Item<T> in the Space matching Key with the given Item. The new Item and its Key are eventually answered as a KeyItem pair.

get(Key key, Period until)

Gets and eventually answers the Item identified by the Key from the Space. If the Key does not (yet) exist, a periodic query is run until the defined Period elapses or the Key is resolved, whichever occurs first. The Period may be None, Forever, or some other period of time between the two extremes. The found Item and its Key are eventually answered as an Optional of KeyItem pair; otherwise, if not found, an empty Optional is eventually answered.

take(Key key, Period until)

Takes the Item identified by the Key out from the Space by removing it. If the Key does not (yet) exist, a periodic query is run until the defined Period elapses or the Key is resolved, whichever occurs first. The Period may be None, Forever, or some other period of time between the two extremes. The found Item and its Key are eventually answered as an Optional of KeyItem pair; otherwise, if not found, an empty Optional is eventually answered.
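As a quick illustration, the following minimal sketch puts an item with an indefinite lease, gets it, and then takes it. The key "status" and value "ready" are purely illustrative, and space is a Space obtained from an Accessor as shown earlier:

// Put an item under a key; with Lease.Forever it remains in the Space until taken.
space.put(new Key1("status"), Item.of("ready", Lease.Forever));

// Get the item, waiting as long as necessary for the key to be resolved.
// The eventual answer is an Optional<KeyItem<T>>, empty if nothing was found.
space.get(new Key1("status"), Period.Forever)
    .andThen(maybeKeyItem -> maybeKeyItem.map(keyItem -> keyItem.object).orElse(null));

// Take the item out of the Space; Period.None answers immediately rather than waiting.
space.take(new Key1("status"), Period.None);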

A full example of using distributed Spaces with transactional write-through and full cluster-wide replication is available in the xoom-distributed-spaces example project. See the simplicity of our API in the following REST resource handler, which makes direct use of distributed Spaces. Both the REST resource handler and the Spaces API are fully reactive, yet completely fluent and understandable:

public class SpacesResource extends DynamicResourceHandler {

    private static final String accessorName = "distributed-accessor";
    private static final String spaceName = "distributed-space";

    public SpacesResource(final Stage stage) {
        super(stage);
    }

    private Completes<Response> get(String key) {
        Accessor accessor = Accessor.named((Grid) stage(), accessorName);
        if (accessor.isNotDefined()) {
            accessor = Accessor.using((Grid) stage(), accessorName);
        }

        Space space = accessor.distributedSpaceFor(spaceName);
        return space.get(new Key1(key), Period.None)
                .andThen(keyItem1 -> keyItem1
                        .map(keyItem2 -> Response.of(Response.Status.Ok, (String) keyItem2.object))
                        .orElse(Response.of(Response.Status.NotFound)));
    }

    private Completes<Response> put(SpaceData data) {
        Accessor accessor = Accessor.named((Grid) stage(), accessorName);
        if (accessor.isNotDefined()) {
            accessor = Accessor.using((Grid) stage(), accessorName);
        }

        Space space = accessor.distributedSpaceFor(spaceName);
        final Key1 key1 = new Key1(data.key);

        return space.put(key1, Item.of(data.value, Lease.Forever))
                .andThen(item -> Response.of(Response.Status.Ok));
    }

    private Completes<Response> delete(String key) {
        Accessor accessor = Accessor.named((Grid) stage(), accessorName);
        if (accessor.isNotDefined()) {
            accessor = Accessor.using((Grid) stage(), accessorName);
        }

        Space space = accessor.distributedSpaceFor(spaceName, 1, Duration.ofMillis(1_000));
        Completes<Optional<KeyItem<Object>>> takeCompletes = space.take(new Key1(key), Period.None);

        return takeCompletes
                .andThen(keyItem1 -> keyItem1
                        .map(keyItem2 -> Response.of(Response.Status.Ok, (String) keyItem2.object))
                        .orElse(Response.of(Response.Status.NotFound)));
    }

    @Override
    public Resource<?> routes() {
        return ResourceBuilder.resource("Distributed Spaces",
                ResourceBuilder.get("/spaces/{key}")
                        .param(String.class)
                        .handle(this::get),
                ResourceBuilder.post("/spaces")
                        .body(SpaceData.class)
                        .handle(this::put),
                ResourceBuilder.delete("/spaces/{key}")
                        .param(String.class)
                        .handle(this::delete));
    }
}

Try it for yourself. Start with the xoom-distributed-spaces example project, make changes, and then build your own distributed data grid/fabric microservice.
