REST competitors

With all the hype surrounding the REST architecture, it may be tempting to think that it is the only component-based architecture. However, there are quite a number of other architectures that allow for remote cooperation.

Today we will look at some of the most popular ones.

XML-RPC

XML-RPC stands for XML Remote Procedure Call. This standard uses XML to encode information about the procedure to be called on the remote system.

XML is carried over HTTP and requires no special port.

The protocol is widely supported, with native libraries in Python, PHP, Java and virtually all other major languages.

To date, it still powers an API for the popular publishing platform WordPress.
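
To get a feel for it, here is a minimal Python sketch using the standard library's xmlrpc.client module. The endpoint URL and the demo.add method are made up for illustration; any XML-RPC server exposing such a method would behave the same way.

import xmlrpc.client

# Hypothetical endpoint; substitute the URL of a real XML-RPC server.
server = xmlrpc.client.ServerProxy("https://www.example.org/xmlrpc")

# The call is serialized into an XML <methodCall> document and POSTed
# over plain HTTP(S), so no special port is needed.
result = server.demo.add(2, 3)
print(result)  # 5, assuming the server implements demo.add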

JSON-RPC

JSON-RPC is an acronym for JSON Remote Procedure Call. It is very similar to XML-RPC and can in fact be thought of as a port of XML-RPC to JSON.

Just like its sister protocol, it is widely supported. It does, however, shine in readability: it encodes information in JSON (JavaScript Object Notation), which is easily human readable and easy for machines to parse.
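
For illustration, a minimal JSON-RPC 2.0 exchange, modelled on the example in the specification, looks like this:

#Request
{"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1}

#Response
{"jsonrpc": "2.0", "result": 19, "id": 1}

The id field lets the client match responses to requests, which allows several calls to be in flight at once.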

SOAP

SOAP (Simple Object Access Protocol), just like XML-RPC, uses XML. It enables a service to remotely trigger a function on a remote machine.

The protocol is web compliant and can usually be served through the normal port 80. It can, however, also be carried over other protocols, including SMTP, and as such can be used in a wider variety of environments and applications.
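
For a flavour of the wire format, here is a minimal sketch of a SOAP 1.1 envelope invoking a hypothetical GetPrice operation; the m namespace and element names are invented for illustration:

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <m:GetPrice xmlns:m="http://www.example.org/stock">
      <m:Item>Apples</m:Item>
    </m:GetPrice>
  </soap:Body>
</soap:Envelope>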

CORBA

CORBA (Common Object Request Broker Architecture) was developed by the Object Management Group. The system was designed to provide a standard for the interoperability of object-based software components in a distributed environment.

Objects publish their interfaces using the Interface Definition Language (IDL) as defined in the CORBA specification.
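
As a sketch, a hypothetical calculator object might publish its interface like this; the module and operation names are invented for illustration:

module Demo {
  interface Calculator {
    long add(in long a, in long b);
  };
};

An IDL compiler then generates client stubs and server skeletons from this definition for each target language.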

The protocol is, however, not as web friendly and typically requires special ports to be open. It does tend to be much faster than JSON-RPC or XML-RPC, though, since it does not carry the burden of verbosity that XML and JSON bring.

Pervasive Component System

PECOS (PErvasive COmponent System) is a component-based model for embedded systems. It consists mainly of components communicating through ports, and it provides an environment that supports the specification, composition, configuration checking and deployment of embedded systems built from software components.

Depending on your situation, one of the above protocols might make the best business sense for your application. If, however, you cannot make a business case for it or you are not sure what would work for you, stick to REST.

For more on the case against RPC-based systems, see A Critique of the Remote Procedure Call Paradigm.

Have you ever used any of the above protocols in your own applications? Let's talk; comment below.

As usual, don't forget to sign up.


CTags management in PHP

We have already talked about ctags before in the entry Text Editors and CTags.

However, if you are a PHP developer you may have noticed that the command ctags -R . becomes awfully slow for anything but trivial projects.

This is usually a result of the tags file becoming too large from indexing all the entries pulled in by Composer.

To take care of this problem, simply create two indexes: one for your src folder (or whichever folder you use as your sources folder) and one for the vendor folder, which contains all the external library code.

You can achieve this separation by doing the following from the root of your PHP application:

  1. rm tags # If you already have a tags file
  2. ctags -R --php-kinds=+cf src # Creates the tags file for your sources folder
  3. ctags -R --php-kinds=+cf -f tags.vendors vendor # Creates the tags file for your vendor directory

Now when you rerun the command shown in 2 above, only the files in your src directory will get reindexed.
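
To make your editor pick up both files, point it at the two indexes. In Vim, for example, the tags option accepts a comma-separated list (assuming the file names from the commands above):

set tags=tags,tags.vendors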

This also implies that you need to manually reindex the vendor tags by rerunning the command shown in 3 above. This only needs to happen when you update your Composer dependencies. Instructions on automating this part are in the Text Editors and CTags article.

If you enjoyed this, remember to sign up for the weekly newsletter from the form on the right.

Thank you for reading.


Using status codes and headers

HTTP enables communication between remote machines. Most developers only take care of the content part of the communication, that is, the part that the user gets to see.

It is however equally important to engage intermediate machines in the conversation.

[Figure: 3 way communication]

We do this by using HTTP status codes and headers.

A sample conversation for a login would go something like this:

#Request
GET /protected/resource HTTP/1.1
Host: www.example.org

#Response
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Protected Resources"
Content-Type: application/json;charset=UTF-8

{
    "error": true,
    "message": "Unauthorized request"
}

In the above interaction we have informed the user that they are not allowed to make this request. But we have also told intermediary machines/services not to retry the request, as well as provided the next steps.

If you were using a browser, a basic auth login window would pop up on this response.

Do not go crazy with the realm value. It is meant to be opaque; that is, your backend system should be able to change without necessitating a change of this value.
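
On the server side, producing that 401 response is straightforward. Here is a minimal sketch using Python and Flask, purely as an example stack; the route and messages mirror the trace above:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/protected/resource")
def protected():
    # Refuse the request and advertise Basic auth as the next step.
    resp = jsonify(error=True, message="Unauthorized request")
    resp.status_code = 401
    resp.headers["WWW-Authenticate"] = 'Basic realm="Protected Resources"'
    return resp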

Next in the conversation, assuming the client has passed the server's authentication challenge, is actually fetching the resource.

#Request
GET /protected/resource HTTP/1.1
Host: www.example.org
Authorization: Basic cGhvdG9hcHAuMDAxOmJhc2ljYXV0aA==

#Response
HTTP/1.1 200 OK
Vary: Authorization
Content-Type: application/json;charset=UTF-8

{
    "error": false,
    "message": "This is my secret please keep it safe"
}

The server in this case confirmed the Authorization header. Requests without it, or with an invalid one, would have returned a 401 as we saw earlier.
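
From the client side, the same exchange might look like the following Python sketch using the requests library; the credentials are the ones decoded from the Basic header in the trace above:

import requests

# requests builds the Authorization: Basic header from the tuple below.
resp = requests.get(
    "https://www.example.org/protected/resource",
    auth=("photoapp.001", "basicauth"),
)
print(resp.status_code)  # 200 when accepted, 401 otherwise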

Now intermediary machines know that the request is authorized and can repeat the request, say, in case of network failure.

The Vary header informs the client machine of which other headers influence the response. So clients know that to repeat this request, an Authorization header is required.

However, responses to GET requests are cacheable by default. This is ordinarily a good thing: the server is saved the extra load of fulfilling repeat requests, and the client experiences less latency. In the case of protected resources this may not be ideal, so we should consider limiting the amount of time the client and any intermediaries store the content.

This can be done by looping the machines into the conversation once more, as such:

#Request
GET /protected/resource HTTP/1.1
Host: www.example.org
Authorization: Basic cGhvdG9hcHAuMDAxOmJhc2ljYXV0aA==

#Response
HTTP/1.1 200 OK
Cache-Control: max-age=3600, private
Vary: Authorization
Content-Type: application/json;charset=UTF-8

{
    "error": false,
    "message": "This is my secret please keep it safe"
}

The Cache-Control header then ensures that these intermediaries do not store the data for more than the specified amount of time, in this case 3600 seconds, or 1 hour. The private directive ensures that the cached response is not shared with or served to other clients.
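
In the Flask sketch from earlier, adding that directive to the successful response is a single line (again, an illustration only):

resp.headers["Cache-Control"] = "max-age=3600, private"  # private cache, 1 hour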

If you liked this piece, sign up for our weekly newsletter.

Let's keep the conversation going.
