Zero-configuration message router and event store

Why Axon Server?

Purpose-built for Axon Framework applications

More traditional forms of integration haven't been designed with the specifics of Axon Framework, or more generally CQRS/ES architectures, in mind. When you use such general-purpose integration techniques, you'll find yourself in a constant struggle to make everything work properly and efficiently. For instance, plain HTTP/REST calls are commonly used in microservices architectures but have clear drawbacks: they're verbose, relatively slow, and synchronous in nature, and while load balancing is possible at the HTTP level, consistent routing based on aggregate id is complicated. Another example is the set of problems associated with using an RDBMS table as an event store: you may find yourself fighting global sequence number gaps, delays in event processing due to polling, and scalability issues related to the number of events. Axon Server has been designed from the beginning to support the CQRS/ES architectural style, implementing optimized routing and storage algorithms.

Client-initiated connections

Axon Server uses HTTP/2 for its connections, specifically Google's gRPC protocol, which adds a binary Protobuf-based RMI layer on top of HTTP/2. This is a very efficient protocol supporting two-way communications. Importantly, in the Axon Server model, all connections are initiated by the client. After the connection has been established, the client can send messages to Axon Server and vice versa. This approach has a clear benefit: the only service location that has to be managed is the location of Axon Server, which must be made known to the application clients. Other than that, no discovery or registration mechanisms are needed: Axon Server will be able to reach the clients simply because the clients reach out to Axon Server.
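As a rough illustration (not Axon Server's actual connector code), the sketch below opens a client-initiated gRPC channel to an assumed Axon Server host on port 8124, Axon Server's default gRPC port; all further command, event, and query traffic would be multiplexed over a channel like this.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class ClientInitiatedConnectionSketch {

    public static void main(String[] args) {
        // The application dials out; Axon Server never needs to discover or
        // contact the client on its own. The host name "axonserver" is an
        // assumed value for this sketch; 8124 is the default gRPC port.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("axonserver", 8124)
                .usePlaintext() // TLS would normally be configured here instead
                .build();

        // A real connector (e.g. Axon Framework's Axon Server connector) would
        // now register its handlers and stream messages bidirectionally over HTTP/2.
        channel.shutdownNow();
    }
}
```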

Handler-aware

Your Axon Framework-based application will have several @CommandHandler, @EventHandler, and/or @QueryHandler methods. Axon Framework registers those with whatever implementation of the Axon buses is present. If you're using the Axon Server version of the buses, this is taken one step further: the Axon Server client relays information about the handlers present in the application to Axon Server. As a result, Axon Server knows which components are available to handle which types of messages, allowing it to configure routing fully automatically.
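A minimal sketch of what such handlers look like; the message types (ShipOrderCommand, OrderShippedEvent, OrderStatusQuery) are hypothetical and exist only to keep the example self-contained. The Axon Server connector forwards the presence of each handler type so matching messages can be routed to this application.

```java
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.queryhandling.QueryHandler;

// Hypothetical message types, defined here only for illustration.
record ShipOrderCommand(String orderId) {}
record OrderShippedEvent(String orderId) {}
record OrderStatusQuery(String orderId) {}

public class OrderHandlers {

    @CommandHandler
    public void handle(ShipOrderCommand command) {
        // validate the command and apply state changes
    }

    @EventHandler
    public void on(OrderShippedEvent event) {
        // update a read model with the shipped order
    }

    @QueryHandler
    public String handle(OrderStatusQuery query) {
        // answer the query from the read model
        return "SHIPPED";
    }
}
```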

Easy to Integrate

Axon Server can be easily integrated with your applications in several ways. Starting from Axon 4, Axon Server integration is enabled by default in Axon Framework, and all you need to configure is the network location of Axon Server. For Axon 3, or non-Axon applications, there's an open-source Axon Server client (available on GitHub and Maven Central). It offers drop-in implementations of Axon Framework's CommandBus, EventBus/EventStore, and QueryBus interfaces. If you're using Spring Boot, you can use the axonhub-spring-boot-autoconfigure dependency, which will wire these buses automatically. If you need more freedom, such as connecting from another programming platform, you can communicate with Axon Server directly via its open HTTP+JSON and gRPC interfaces.
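For the Axon 4 + Spring Boot case, the sketch below assumes the standard axon-spring-boot-starter dependency is on the classpath; the host name in the comment is made up, and 8124 is Axon Server's default gRPC port.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// With Axon Framework 4's Spring Boot starter on the classpath, connectivity to
// Axon Server is auto-configured. Typically the only setting you provide is the
// server location, e.g. in application.properties:
//
//   axon.axonserver.servers=axonserver.example.com:8124
//
// (hypothetical host; the property can be omitted entirely when Axon Server
// runs on localhost with default ports)
@SpringBootApplication
public class OrderServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}
```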

Distributed as a JAR File

Axon Server is distributed as a stand-alone jar file that you can run in any environment you like, on-premise or in the cloud. There are two versions of the jar file: the free Axon Server and the commercial Axon Server Enterprise, which requires a license key to use. Axon Server can be configured to run as a messaging platform (previously called “AxonHub”), as an event store (previously called “AxonDB”), or both at the same time. Axon Server runs well directly on a (virtual) machine and has also been tested extensively in Docker/Kubernetes containerized settings.

Fault Tolerance (requires license)

Axon Server Enterprise uses the Raft consensus algorithm to implement a fault-tolerant distributed system. This is a well-known, well-documented algorithm that underpins fault tolerance in platforms such as Kubernetes and Cloud Foundry. It can handle a wide range of network and host failure scenarios while keeping the cluster available to clients and preserving integrity, something that is verified every day in AxonIQ's testing infrastructure.

Quorum-Based Storage (requires license)

When using Axon Server Enterprise, a transaction is not confirmed to the client until a majority of the nodes have confirmed it individually. Combined with spreading the cluster nodes across data centers ("availability zones" when using a public cloud provider), this gives powerful protection against data loss.
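The rule itself is simple. The sketch below is an illustrative majority check, not Axon Server's implementation: a transaction counts as committed only once a strict majority of the cluster nodes has confirmed it.

```java
import java.util.List;

public class QuorumCheckSketch {

    // Returns true once a strict majority of nodes has confirmed the transaction.
    static boolean isCommitted(List<Boolean> nodeConfirmations) {
        long confirmed = nodeConfirmations.stream().filter(Boolean::booleanValue).count();
        int quorum = nodeConfirmations.size() / 2 + 1; // strict majority
        return confirmed >= quorum;
    }

    public static void main(String[] args) {
        // Three-node cluster spread over availability zones:
        System.out.println(isCommitted(List.of(true, true, false)));  // true: 2 of 3 confirmed
        System.out.println(isCommitted(List.of(true, false, false))); // false: no majority yet
    }
}
```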

Leader/Replica Architecture (requires license)

For event storage, Axon Server Enterprise has been designed with a leader/replica clustering architecture (as opposed to the peer-to-peer architecture implemented in, e.g., Cassandra). At any point in time, a single node is the leader and has the responsibility to verify transactions for consistency and integrity. The replicas store a copy of each transaction to guarantee durability. When the leader becomes unavailable, a leader election protocol ensures that a new node takes up this responsibility. For the event sourcing use case, the leader/replica architecture is the most efficient way to implement a reliable, fault-tolerant cluster.

Horizontal Scalability (requires license)

Even though Axon Server Enterprise uses a single-leader concept, several mechanisms allow for horizontal scalability. First of all, the single-leader mechanism is only used where it is needed: for event storage. For the routing of command and query messages, an Axon Server Enterprise cluster functions in a peer-to-peer fashion. Also, the leader role for storing events is assigned at the context level rather than the cluster level. This means that when multiple contexts are used, different nodes take up the leadership role for different contexts, which also balances the load across the cluster.
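To make the per-context leadership concrete, here is a purely illustrative assignment (the context and node names are made up): each context has its own leader, so the write load for event storage is spread across the cluster while command and query routing remains peer-to-peer.

```java
import java.util.Map;

public class ContextLeadershipSketch {

    public static void main(String[] args) {
        // Hypothetical three-node cluster with three contexts: leadership is
        // decided per context, so no single node leads everything.
        Map<String, String> leaderPerContext = Map.of(
                "orders",   "node-1",
                "payments", "node-2",
                "shipping", "node-3");

        leaderPerContext.forEach((context, leader) ->
                System.out.println("Context '" + context + "' stores events via leader " + leader));
    }
}
```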

Storage Space per Context (requires license)

In Axon Server Enterprise, each context has its own storage space on disk (a directory). You’ll find the standard Axon Server storage structure in this directory: event and snapshot segments with the associated index files. This clear separation gives a lot of flexibility in managing different contexts in different ways, for example with different retention periods, encryption, backup policies, and choice of storage medium (fast, expensive SSDs or slower, cheaper HDDs).