--------------------------------- Abstracting beanpole: ---------------------------------

Plugins *MUST NOT* know whether a computer is networked. That job is delegated to *one* particular module: glue.core. If a client connects to a server and contains a hook which acts like a plugin, the backend should treat it exactly as it would another networked server, or even an internal one. Glue.core would handle the handshake / authentication. Likewise, connected servers using brazen must not handle communication with other servers differently than internal communication.

For security's sake, each channel must provide metadata identifying whether it is private, protected, or public. Channels default to private if the metadata is absent.

- Public channels are available to call without authentication.
- Protected channels are available to call only by other crusted servers / clients.
- Private channels can only be handled within the application.

All metadata is handled by the parser / router.

--------------------------------- Routing: ---------------------------------

The routing mechanism should parse syntactic sugar to identify how a particular channel is handled. Each channel path must be separated by forward slashes to allow for parameters. This is primarily for a future implementation where beanpole might support HTTP requests.

'[type] [meta]* [channel_name]* [channel/:param] [additional]*'

- type: the type of channel - push, pull, ???
- meta: information attached to the channel to help glue.core and other handlers.
- channel_name: the name of the channel, for private use apart from the channel path. Useful if the channel needs to change, like a variable name.
- channel: the physical channel, separated/by/slashes.
- additional: additional parameters specific to the channel type.

on push application/ready:

'push private -pull application/ready': function(data){}

on private pull of application/ready, passing through application/exists:

'pull private application/exists -> application/ready': function(pull){ }

(a rough sketch of how these strings could be parsed sits at the very end of these notes)

Calling back: there needs to be a way for modules handling a particular request to identify where it's coming from.

--------------------------------- Types of channels: ---------------------------------

- pull - (getting) channels which respond to a particular request, which may be pushed out at a later time. This comes from the source that makes the "push" with the same name.
- push - (setting) channels which are pushed out after a particular change has occurred. This is one-to-many.

Why I chose for push/pull to be different handlers with the same name:

Inspiration was actually taken back in the day when I was developing in Actionscript (I know, stfu). Bindable metadata was a slick way of getting a particular property from an object, and then sticking to it for any changes. For beanpole, "pull" would be the action of getting the current value of a particular "pod" (object), and "push" is the method of sticking to it.

Problems:

How do we distinguish single pulls from multiple pulls? For instance, for multiple pulls I may want to "pull" stats from all the servers, but for a single pull I may want to register a particular queue and *only* send it to one instance.

Possible solutions:

on('pull multi…') / proxy.pull('multi…') is a different channel handler than on('pull…') / proxy.pull('…').

So technically, if I register both

proxy.on('pull get.name', …)
proxy.on('pull multi get.name', …)

there wouldn't be any complaining, since I can call one or the other. The first one can be the *only* handler, and the second one can have multiple (a rough sketch of both registrations follows below).
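A minimal sketch of what those two registrations might look like, assuming a hypothetical proxy API (proxy.on and the request object here are placeholders, nothing about them is settled):

// singleton pull - only one handler is expected to own this channel
proxy.on('pull get.name', function(request) {
    // request.end is a stand-in for however the response gets sent back
    request.end('app-one');
});

// multi pull - any number of handlers may register; each one responds
proxy.on('pull multi get.name', function(request) {
    request.end('app-' + process.pid);
});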
So proxy.pull('get.name') would call the first one, and proxy.pull('multi get.name') would call the second one.

Multiple pulls also need to return to the requestor how many items are handling the request. This probably needs to take on a response, ondata, and end approach similar to node.js's streaming API.

Streaming:

Without adding too much shit to the architecture, there may come a time where content is streamed to a particular requestor. This could be highly beneficial for programs which send files back and forth, something beanpole isn't necessarily equipped for at the moment. So, without changing the architecture, perhaps provide a method for supporting such a feature.

For example, this could be used:

proxy.pull('get.file', 'text.txt', {
    data: function(buffer) { },
    end: function() { }
});

or this could be used:

proxy.pull('get.file', 'text.txt', function(body) { });

But what about pulling from multiple sources?

proxy.pull('multi get.file', 'text.txt', function(source) { // callback fires multiple times
    source.on('data…
});

Hmmmm…

-------------------------------- Network Topology --------------------------------

The architecture, mainly glue.core, must have the ability to manage connected servers, many of which should only communicate with particular applications.

For instance: currently there are two applications on spice.io which register queues with the queue app. Each application has a slave which handles the queue, but the queue must know which slaves to send to. So:

- Each app must have an identifier shared amongst the other apps it needs to communicate with (_appId).
- Since queues are protected, the handshake between the queue app and the requester must be authenticated.
- The queue must receive the app id and know which apps to send to via registered channels.

What if there are dozens of slaves running across a cluster of servers? There *must* be a way for glue.core to use a load-balancing mechanism to send individual pulls to servers: round robin, least connected, etc. Stats from servers to see which is least busy?

What if there are multiple singleton pulls registered to glue.core? Duh, fucking round-robin that shit when a pull-request is made (rough sketch below).
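A rough sketch of that round-robin dispatch, assuming glue.core keeps a plain list of registered handlers per channel (registry, register, and dispatchPull are made-up names for illustration):

// channel -> { handlers: [], cursor: 0 }
var registry = {};

function register(channel, handler) {
    var entry = registry[channel] || (registry[channel] = { handlers: [], cursor: 0 });
    entry.handlers.push(handler);
}

// each pull-request goes to the next registered handler in line
function dispatchPull(channel, request) {
    var entry = registry[channel];
    if (!entry || entry.handlers.length === 0) return;

    var handler = entry.handlers[entry.cursor % entry.handlers.length];
    entry.cursor++;
    handler(request);
}

Least-connected would just swap the cursor for a per-handler count of in-flight requests; the channel registration itself wouldn't change.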
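And, circling back to the Routing section above, a rough sketch of how a channel string might be broken into its pieces. The field names (type, meta, channel, target) are guesses, and it ignores [channel_name], :param, and [additional] handling entirely:

// 'pull private application/exists -> application/ready' becomes:
// { type: 'pull', meta: ['private'], channel: ['application', 'exists'], target: 'application/ready' }
function parseChannel(str) {
    var parts = str.split('->');
    var target = parts[1] ? parts[1].trim() : null;
    var tokens = parts[0].trim().split(/\s+/);

    var type = tokens.shift();          // push / pull
    var channel = tokens.pop();         // last token is the channel path
    var meta = tokens;                  // everything in between is metadata

    return {
        type: type,
        meta: meta,
        channel: channel.split('/'),    // channel paths are separated by slashes
        target: target
    };
}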