Writing parallel and distributed programs is often challenging, and much of the development time goes into dealing with concurrency issues. The actor model provides a high-level, scalable, and robust abstraction for building distributed applications. It offers several benefits:
- Scalability: Actors easily scale across nodes. The asynchronous, non-blocking nature of actors allows them to handle huge volumes of concurrent tasks efficiently.
- Concurrency: The actor model abstracts over concurrency, allowing developers to avoid raw threads and locks.
- Modularity: An actor system decomposes naturally into a collection of actors that can be understood independently. Actor logic is encapsulated within the actor itself.
Xoscar implements the actor model in Python and provides user-friendly APIs that offer significant benefits for building applications on heterogeneous hardware:
- Abstraction over low-level communication details: Xoscar handles all communication between actors transparently, whether on CPUs, GPUs, or across nodes. Developers focus on application logic rather than managing hardware resources and optimizing data transfer.
- Flexible actor models: Xoscar supports both stateful and stateless actors. Stateful actors ensure thread safety for concurrent systems, while stateless actors can handle massive volumes of concurrent messages. Developers choose the model that fits their needs (see the sketch after this list).
- Batch method: Xoscar provides a batch interface that significantly improves call efficiency when an actor method is invoked a large number of times.
- Advanced debugging support: Xoscar can detect potential issues such as deadlocks, long-running calls, and performance bottlenecks that would otherwise be nearly impossible to troubleshoot in a heterogeneous environment.
- Automated recovery: Xoscar can monitor actors and, if configured to do so, automatically restart them upon failure, enabling fault-tolerant systems.
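Here is a minimal sketch of the two flavors; the class and method names are illustrative, not part of Xoscar's API:

```python
import xoscar as xo

# Stateful actor: messages to one actor instance are processed one at a time,
# so mutating self._value needs no explicit locking.
class CounterActor(xo.Actor):
    def __init__(self):
        self._value = 0

    def increment(self):
        self._value += 1
        return self._value

# Stateless actor: holds no mutable state, so its messages can be handled concurrently.
class EchoActor(xo.StatelessActor):
    async def echo(self, message):
        return message
```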
Xoscar allows you to create multiple actor pools on each worker node, typically binding an actor pool to a CPU core or a GPU card. Xoscar provides allocation policies so that whenever an actor is created, it will be instantiated in the appropriate pool based on the specified policy.
When actors communicate, Xoscar will choose the optimal communication mechanism based on which pools the actors belong to. This allows Xoscar to optimize communication in heterogeneous environments with multiple processing units and accelerators.
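A rough sketch of how pools and actors fit together, assuming `xo.create_actor_pool` with an `n_process` argument, a pool object usable as an async context manager, and the illustrative `CounterActor` from the sketch above (the address and process count are arbitrary):

```python
import asyncio
import xoscar as xo

async def main():
    # Start an actor pool on this worker; n_process controls how many
    # sub-pools (e.g. one per CPU core) back the pool.
    pool = await xo.create_actor_pool(address='127.0.0.1:13579', n_process=2)
    async with pool:
        # The allocation policy decides which sub-pool the new actor is placed in.
        ref = await xo.create_actor(
            CounterActor, address='127.0.0.1:13579', uid='CounterActor')
        print(await ref.increment())  # -> 1

asyncio.run(main())
```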
Binary installers for the latest released version are available at the Python Package Index (PyPI).
```bash
# PyPI
pip install xoscar
```
The source code is currently hosted on GitHub at: https://github.com/xorbitsai/xoscar.
Building from source requires that you have cmake and gcc installed on your system.
- cmake >= 3.11
- gcc >= 8
```bash
# If you have never cloned xoscar before
git clone --recursive https://github.com/xorbitsai/xoscar.git
cd xoscar/Python
pip install -e .

# If you have already cloned xoscar before
cd xoscar
git submodule init
git submodule update
cd Python && pip install -e .
```
Here are the basic APIs for Xoscar. First, define an actor:
```python
import xoscar as xo

# stateful actor; for a stateless actor, inherit from xo.StatelessActor
class MyActor(xo.Actor):
    def __init__(self, *args, **kwargs):
        pass

    async def __post_create__(self):
        # called after created
        pass

    async def __pre_destroy__(self):
        # called before destroy
        pass

    def method_a(self, arg_1, arg_2, **kw_1):  # user-defined function
        pass

    async def method_b(self, arg_1, arg_2, **kw_1):  # user-defined async function
        pass
```
Create an actor:

```python
import xoscar as xo

actor_ref = await xo.create_actor(
    MyActor, 1, 2, a=1, b=2,
    address='<ip>:<port>', uid='UniqueActorName')
```
Get a reference to an actor and invoke its methods:

```python
import xoscar as xo

actor_ref = await xo.actor_ref(address, actor_id)

# send
await actor_ref.method_a.send(1, 2, a=1, b=2)

# equivalent to actor_ref.method_a.send
await actor_ref.method_a(1, 2, a=1, b=2)

# tell, which sends a message asynchronously and does not wait for a response
await actor_ref.method_a.tell(1, 2, a=1, b=2)
```
Xoscar provides a set of APIs for writing batch methods. Simply add the `@extensible` decorator to an actor method and create a batch version. All calls wrapped in a batch will be sent together, reducing the possible RPC cost.
```python
import xoscar as xo

class ExampleActor(xo.Actor):
    @xo.extensible
    async def batch_method(self, a, b=None):
        pass
```
Xoscar also supports creating a batch version of the method:
```python
class ExampleActor(xo.Actor):
    @xo.extensible
    async def batch_method(self, a, b=None):
        raise NotImplementedError  # this will redirect all requests to the batch version

    @batch_method.batch
    async def batch_method(self, args_list, kwargs_list):
        results = []
        for args, kwargs in zip(args_list, kwargs_list):
            a, b = self.batch_method.bind(*args, **kwargs)
            result = (a, b)  # process the request; real logic goes here
            results.append(result)
        return results  # return a list of results
```
In a batch method, users can define how to process a batch of requests more efficiently. Calling batch methods is easy: use `<method_name>.delay` to build a batched call and `<method_name>.batch` to send them:
```python
ref = await xo.actor_ref(uid='ExampleActor', address='127.0.0.1:13425')
results = await ref.batch_method.batch(
    ref.batch_method.delay(10, b=20),
    ref.batch_method.delay(20),
)
```