A Journey Towards Cloud Data Services

The cloud now extends across every kind of service and technology, and in all of it data and databases play a crucial role. Whenever we look at the cloud through the lens of data, we also have to consider its database and data services, weighing factors such as security, reliability and fast data delivery. What follows is a journey through a set of those data services, one you can work through in a limited amount of time, week by week.

Week 1: RethinkDB

The first stop was RethinkDB, a Compose-hosted database. The advocates set off by deploying it on Compose, getting to know the admin interface and then connecting with Node.js. They then created a bug-tracking application which made use of RethinkDB’s change feeds to update browsers in real time.
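
As a rough illustration of the change-feed idea (the article itself used Node.js), here is a minimal Python sketch that listens for changes on a hypothetical "bugs" table; the database, table and connection details are assumptions, not taken from the original tutorial.

    # Listen to a RethinkDB changefeed; database/table names and connection
    # details are illustrative placeholders.
    from rethinkdb import RethinkDB

    r = RethinkDB()
    conn = r.connect(host="localhost", port=28015)

    # changes() returns a feed that yields a document each time a bug is
    # inserted or updated, the mechanism behind the real-time browser updates.
    for change in r.db("tracker").table("bugs").changes().run(conn):
        print(change["new_val"])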

Week 2: MongoDB

Next up was MongoDB, the popular NoSQL database. Again the advocates deployed their MongoDB database using Compose’s easy-to-use platform. This time the application was written in PHP, and they stepped through the process of connecting a PHP application to Compose’s MongoDB using its SSL certificates. They then got down to inserting and querying data, nesting data, and ended up creating an example blog.
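
For flavour, here is a hedged Python sketch (the article’s own code was PHP) of connecting to a TLS-enabled Compose MongoDB deployment and inserting and querying a blog post with pymongo; the connection URI, CA file path and collection names are placeholders rather than values from the article.

    # Connect to a TLS-enabled MongoDB deployment and work with a blog collection.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://user:password@host1:10764,host2:10764/blog?replicaSet=example",
        tls=True,
        tlsCAFile="compose-ca.pem",  # the deployment's self-signed CA certificate
    )

    posts = client.blog.posts
    posts.insert_one({"title": "Hello Compose", "tags": ["mongodb", "compose"]})
    for post in posts.find({"tags": "mongodb"}):
        print(post["title"])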

Week 3: PostgreSQL

Switching from a NoSQL to a SQL heading, the advocates’ next port of call was PostgreSQL, the indomitable RDBMS. It’s another database available from Compose, and after working through deploying and connecting to a fresh PostgreSQL database, they set about creating a bookstore database at the psql command line. This example comes complete with a look at the use of foreign key relationships, the PostgreSQL ARRAY type and constraint checking. Once that’s all set up, they move on to connecting a PHP, Node.js, Python or Go application to the database.
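
The bookstore schema below is a rough Python/psycopg2 sketch of the features the walkthrough covers: foreign keys, the ARRAY type and constraint checking. The table and column names are illustrative, not the tutorial’s exact schema.

    # Create a small bookstore schema and query it; names are illustrative.
    import psycopg2

    conn = psycopg2.connect("postgresql://user:password@host:5432/bookstore")
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE authors (
            id   SERIAL PRIMARY KEY,
            name TEXT NOT NULL
        );
        CREATE TABLE books (
            id        SERIAL PRIMARY KEY,
            title     TEXT NOT NULL,
            author_id INTEGER REFERENCES authors(id),  -- foreign key relationship
            tags      TEXT[],                          -- PostgreSQL ARRAY type
            price     NUMERIC CHECK (price > 0)        -- constraint checking
        );
    """)
    conn.commit()

    # Use the ARRAY type in a query: find every book tagged 'databases'.
    cur.execute("SELECT title FROM books WHERE 'databases' = ANY(tags)")
    for (title,) in cur.fetchall():
        print(title)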

Week 4: Cloudant

The advocates then moved on to their first non-Compose database, Cloud Data Services’ Cloudant, the JSON store based on Apache CouchDB. This time they deployed the database through the Cloudant Dashboard and showed how to communicate with it over HTTPS using curl. Populating the database with student information using POST requests and modifying it with PUT opens up the examples, which then move on to using Cloudant’s Views – its data fetching, selection and analysis features – to create complex queries. There’s then some fuzzy matching using a Lucene-based search engine, wrapping up with a brief look at the web interface.
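
The curl calls translate directly into any HTTP client; the Python sketch below mirrors the POST-then-PUT flow, with the account, database name and document fields standing in as placeholders for whatever the original examples used.

    # Create and then update a Cloudant document over HTTPS.
    import requests

    BASE = "https://example-account.cloudant.com"
    AUTH = ("apikey", "apipassword")  # placeholder credentials

    # POST a new student document; Cloudant returns its generated id and rev.
    created = requests.post(f"{BASE}/students",
                            json={"name": "Ada", "course": "Databases"},
                            auth=AUTH).json()

    # PUT an update to the same document, supplying the current revision.
    requests.put(f"{BASE}/students/{created['id']}",
                 json={"_rev": created["rev"],
                       "name": "Ada",
                       "course": "Distributed Systems"},
                 auth=AUTH)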

Week 5: etcd

The next stop was etcd, a database designed with a very specific purpose: to provide a single source of truth. Using a Compose-deployed cluster, the advocates dove into etcd’s HTTP interface and the command line to work with etcd’s basic features. Then it was over to Node.js to build a simple configuration manager on top of etcd. Configuration management is what etcd was built for, and it has the features you need for the job: the ability to wait for changes, a structured hierarchy of key/values, all in a fast, secure, consensus-arbitrated cluster.
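
As a sketch of how that configuration-manager idea maps onto etcd’s HTTP interface, here is a Python version (the article used Node.js) against the v2-style keys API; the endpoint, credentials and key paths are assumptions.

    # Write, read and watch a configuration key over etcd's v2 HTTP API.
    import requests

    ETCD = "https://etcd.example.com:2379/v2/keys"  # placeholder endpoint
    AUTH = ("root", "password")                     # placeholder credentials

    # Store a value in the key hierarchy.
    requests.put(f"{ETCD}/config/app/loglevel", data={"value": "debug"}, auth=AUTH)

    # Read it back.
    node = requests.get(f"{ETCD}/config/app/loglevel", auth=AUTH).json()["node"]
    print(node["key"], "=", node["value"])

    # Block until the key changes: the "wait for changes" feature that makes
    # etcd a good fit for configuration management.
    changed = requests.get(f"{ETCD}/config/app/loglevel",
                           params={"wait": "true"}, auth=AUTH)
    print("new value:", changed.json()["node"]["value"])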

Week 6: IBM Graph

Graph databases excel at managing complex inter-relationships between entities, which makes them ideal for the mesh of social connections that defines the modern world. It’s here that the advocates look at IBM Graph, based on Apache TinkerPop, and run an instance of it on Bluemix. From there they create a graph, a schema to structure the data in the graph, and then add vertices (aka nodes) and edges to populate it. For their example, they model people and interests and then search for friends with common interests. With a traditional table- or collection-centric database, this can be a recursive challenge, but for a graph database, it’s a basic capability of the model.
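
To give a feel for the “friends with common interests” query, here is a generic sketch that posts a Gremlin traversal to a TinkerPop-style HTTP endpoint from Python; the URL, credentials, labels and property names are assumptions rather than IBM Graph’s exact API.

    # Ask the graph for people who share an interest with 'alice'.
    import requests

    GRAPH = "https://graph.example.bluemix.net/g"   # placeholder endpoint
    AUTH = ("username", "password")                 # placeholder credentials

    # Walk from alice to her interests, then back out to other people who
    # like the same things, removing alice herself and any duplicates.
    traversal = (
        "g.V().has('person', 'name', 'alice')"
        ".out('likes').in('likes')"
        ".has('name', neq('alice')).dedup().values('name')"
    )

    resp = requests.post(f"{GRAPH}/gremlin", json={"gremlin": traversal}, auth=AUTH)
    print(resp.json())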

Week 7: Redis

The final stop on the seven-week tour is Redis, the essential in-memory key-value store which can be valuable in most application stacks. Most commonly used as a cache, Redis can reduce the load from common queries. But it is also a versatile data store, with hashes, sets, scored sorted sets and more allowing applications to use it as a way to share common data or keep running, co-ordinated counts on common values. Add in its publish and subscribe features, solid atomic operations and controllable persistence, and it earns a place in almost any stack.
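
The caching and counting patterns above are only a few lines with the redis-py client; the host, key names and expiry below are illustrative only.

    # Cache a query result, keep a running count and score items in a sorted set.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Cache a common query result with an expiry so stale data ages out.
    r.setex("cache:front-page", 60, "rendered HTML or serialised JSON")
    print(r.get("cache:front-page"))

    # Keep a coordinated running count on a shared value.
    r.incr("hits:front-page")

    # Maintain a simple scored leaderboard with a sorted set.
    r.zincrby("popular:articles", 1, "cloud-data-services")
    print(r.zrevrange("popular:articles", 0, 4, withscores=True))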

End

And so at the end of the seven weeks, seven databases have been visited. They’ve covered the generation of real-time updates, the indexing of JSON documents, the relating of data across tables, the creation of sources of truth, the traversal of social graphs and the rapid caching of essential values. Each database has value in most application stacks, which is why it’s worth visiting each one, if only for a short while, to know when it will pay to use it.
