Monday, July 24, 2017

Open learning and the race to the Cloud

Last year I got a chance to work on a customer solution deployment based on a Cloud-heavy Apache Stratos + Kubernetes + Mesos setup. It was a brash, hands-on introduction to the circuitry that powers many a cloud solution. At the end of that engagement, I made a mental note to pay a second visit to some of the technologies I was exposed to, once time was at hand.

Not all of us Sri Lankans have the mental abilities or the tough stomach needed to digest and perform the academic aerobics required to secure a good free education in our country. Even fewer have the financial capability to fund a good education abroad. So naturally, many of us who wish to continue learning find ourselves doing so through platforms such as Coursera, Udacity and Khan Academy. With these platforms you can now access the latest content, curated and delivered by experts hailing from prestigious universities halfway around the world. All of these changes tip the balance in favor of those who are truly interested in a subject.

Apart from the platforms mentioned above, as an engineer who's interested in keeping up with the technologies in my space, I've found the free trials offered by many vendors to be ridiculously useful.

From a provider's perspective, it's a great way to market your offering, especially if the long-term success of the offering depends on the adoption and loyalty of specialized consumers. This happens to be the case with the Cloud. In the long run, the ultimate winner or winners will be decided by the level of developer traction secured. It is for this very reason, I believe, that you can now get $300 worth of computation/storage time on the Google Cloud Platform for free, use some services on AWS for free for extended periods of time, and try out products like Apigee Edge and Red Hat OpenShift without having to spend a cent.

Back to the topic at hand: this month I thought I'd pick up where I left off last year, get my hands a little dirty and get a taste of the latest from the Cloud space. I cashed in my GCP green and took Red Hat OpenShift for a spin. This post captures a few thoughts from the experience.

Google Cloud Platform

Coming from a middleware background, it's easy to categorize GCP as an end-to-end middleware platform in the cloud (forgive my ignorance, I'm sure it's a lot more than that). As such, it offers all the components one could hope for when modernizing IT systems.


Infrastructure

Users may assert as much or as little control over their infrastructure as they wish. Those who are seeking IaaS-level control over their systems may build up starting at the Compute Engine. If the user wants to delegate work at this layer to the platform, they may build up from the Container Engine or App Engine instead.
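As a rough illustration of the difference between those two entry points, here is what each looks like from the gcloud CLI (a sketch of my own; the instance, cluster and zone names are placeholders):

    # IaaS route: provision a raw VM on Compute Engine and manage it yourself
    gcloud compute instances create demo-vm \
        --zone=us-central1-a \
        --machine-type=f1-micro

    # Platform-leaning route: let Container Engine (managed Kubernetes)
    # run the nodes and schedule containers for you
    gcloud container clusters create demo-cluster \
        --zone=us-central1-a \
        --num-nodes=1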

Storage

A bucket is the atomic storage unit in the GCP context. Based on the accessibility and performance requirements of the data, the storage options can be grouped into three main categories: Standard, which provides the best SLA for storage that needs to be accessed frequently and globally; RDA, for storage needs that are less taxing; and Nearline/Coldline, for storage that is rarely accessed, such as backups or disaster recovery.
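A minimal sketch of working with buckets through the gsutil CLI (the bucket names are placeholders of my own):

    # Create a bucket with the default (Standard-class) settings
    gsutil mb gs://demo-bucket

    # Create a Nearline bucket in a specific location for rarely accessed backups
    gsutil mb -c nearline -l us-east1 gs://demo-backup-bucket

    # Push a local archive into the backup bucket
    gsutil cp backup.tar.gz gs://demo-backup-bucket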

GCP provides an array of storage solutions meant for application consumption and insight generation, everything from RDBMS services such as Cloud SQL to NoSQL services such as Cloud Datastore. Storage for analytics is provided through Bigtable, which offers the high write throughput needed for scenarios that push in large volumes of data to be processed later on.
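For example, provisioning a managed relational instance is a single CLI call. A hedged sketch (the instance name and tier are placeholder values, and the exact flags may vary by SDK version):

    # Provision a small Cloud SQL instance, then verify it is up
    gcloud sql instances create demo-sql --tier=db-n1-standard-1 --region=us-central1
    gcloud sql instances list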


Utilities

Utility capabilities such as in-flight data transformations, reliable messaging and identity management are provided through solutions such as Cloud Dataflow, Cloud Pub/Sub and IAM.
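As a quick taste of the messaging piece, a sketch of a Pub/Sub round trip from the CLI (topic and subscription names are my own placeholders; on older SDK versions these commands sit behind the beta track, i.e. gcloud beta pubsub ...):

    # Create a topic and a subscription bound to it
    gcloud pubsub topics create demo-topic
    gcloud pubsub subscriptions create demo-sub --topic=demo-topic

    # Publish a message, then pull it back
    gcloud pubsub topics publish demo-topic --message="hello"
    gcloud pubsub subscriptions pull demo-sub --auto-ack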

For anyone who wishes to experience the platform and its breadth, this Coursera course would be a great place to start[1].


What I liked about the platform and the free trial,
  • There seems to be a one-to-one mapping between GCP services/products and the capabilities/offerings you would expect from a middleware vendor. This makes mapping solutions to the platform easier.
  • The platform-wide logging and tracing capabilities that just work.
  • The utilities provided to make development work easier, such as APIs for all the services, client-side libraries to make tie-in work easier, and the CLI (figure 1), which wraps the service APIs to provide developers a convenient means of access.
  • The flexibility provided in creating the consumption architecture. For example, someone creating a backend for a mobile application may directly consume some of the GCP services (such as storage), or utilize GCP's API creation capabilities to expose their own services for consumption in much the same way the GCP APIs are consumed, but with better control[2][3].
  • The responsive support system. I tested the waters with a Container Engine query and sure enough got a prompt, appropriate response.
  • The ability to pay for only what you use (which is a given and a key selling point for the cloud).
  • Looks like the USD $300 given by Google can carry evaluation work far.
 


Figure 1
Red Hat OpenShift

Having built a billion-dollar global organization that has withstood over two decades of industry battering, Red Hat has proven it has mastered what it takes to make an Open Source business model work; I don't think there will be many naysayers on that point. Their key proposition is value addition (be it functional or operational) on top of generic Open Source, a simple but effective mantra that works!

OpenShift is aimed at the on-premise managed cloud space, which is a fancy way of saying it puts some of the best of the tech that makes cloud services like GCP and AWS work behind a corporate network, for added control. Therefore, to gain the most from the offering, it should be deployed inside an organization's data center.


First impressions and what I liked about the trial,
  • The trial gives you a 14-day all-access pass to the PaaS solution, deployed on either GCP or Azure. If you go with GCP you get about 6 hours of computation access at a time, which is plenty for anyone who wishes to evaluate the functional capabilities.
  • The product UI (figure 2) is easy to understand and difficult to get lost in.
  • The CLI client effectively wraps the Kubernetes and OpenShift APIs, making deployment and routing setup easier than it would otherwise be (see the sketch after this list).
  • Access to Docker Hub from within the trial setup.
  • The concise lab documentation that takes you through the key functionality. 
  • The product seems to support the leading conventions and technologies in the space, either through value-added spin-offs of their own or through vanilla components where those are good as is.
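A minimal sketch of the CLI workflow I mean, assuming an image pulled from Docker Hub (the cluster URL, token and names are placeholders of my own):

    # Log in to the trial cluster and create a project to work in
    oc login https://<cluster-url> --token=<token>
    oc new-project demo

    # Deploy an app straight from a Docker Hub image,
    # then expose the resulting service through a route
    oc new-app openshift/hello-openshift
    oc expose svc/hello-openshift
    oc get routes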



Figure 2






Sunday, July 2, 2017

The Everything Store: from Books to the Cloud

For a while, I was curious as to how a company known and reputed for selling books became an innovator and leader in cloud technology/services. The two domains seem unrelated, and the fact that the same founder is behind both ventures seemed coincidental.

"The Everything Store" by Brad Stone is definitely one of the better biographical nonfiction works I’ve read, I still have a few more chapters more to go but it has already answered the question I wanted it to answer when I decided to purchase it. 

When you get started on the book, it becomes clear that Bezos always intended Amazon to become a technology company; he wasn't really sure how to get there, but that was the end goal.

Time and time again, Bezos decides to capitalize on opportunities that common sense/wisdom would say have a small chance of paying off, and as such should be passed over for other opportunities with a greater chance of paying off big. At times he approaches these opportunities like a scientist.

His decision to sell books on Amazon was fueled by his realization that a book is the same item regardless of where you decide to buy it; this quality makes consumers more likely to brave the untested waters of online shopping, given the price is right! He also realized that books in America were controlled by a handful of publishers, and that they are relatively easy to ship in good condition. For someone who has not taken the time to look into the Amazon story, it might seem that from the point Amazon becomes a hit with consumers it's all smooth sailing and ventures that pay off big, but that's not the image "The Everything Store" paints.

Before venturing further into how Amazon transitioned from selling books to selling server resources, we need to talk about luck! In business and in personal life, the importance of the role luck plays in deciding the outcomes of our actions is often understated. Unlike system design in engineering, in the real world the scope of the systems we are part of cannot be defined with absolute certainty; there are just too many known and unknown variables at play. For the sake of argument and the continuation of this review, let's define luck as all the factors beyond the control of the individual that have a moderating or causal relationship with the outcome the individual desires.

Luck, it so happened, was favorable for Amazon and Bezos when they first started out, but this was not always the case. What I found as I went about reading the book were the countless endeavors by Bezos and his team that had little to no success. The book tells the story of a company that strives to better its core business, book retail, while striving to attain the Bezos vision of transforming into a technology company. The work done by Jeff Wilke on Amazon's fulfillment centers shows Amazon's and Bezos' commitment to their core business.

Parallel to the efforts of "making what works better", Bezos and Amazon show a relentless desire to venture out into the technological domain; book preview, A9 search and internal system modernization are examples of this desire. Though some of these endeavors had little success, it becomes apparent that Bezos never gave up on the vision, and when O'Reilly proposed exposing Amazon sales data as APIs for the benefit of the community, Bezos seems to have become aware of another stakeholder in technology companies such as Amazon: the developers. This appears to have sparked Bezos' interest in providing services to outside developers on top of the infrastructure Amazon had worked tirelessly to make one of the best in the industry. The decision would also have been influenced by Amazon's previous success in providing warehousing services to external sellers through the superior fulfillment centers Wilke had built.

By the early 2000s, the foundation was finally in place for Amazon's foray into the cloud. Bezos put a resourceful executive, Andy Jassy, in charge of his latest pet project, Amazon Web Services (AWS). Just as with his decision to start Amazon, luck was favorable. The competition, Google and Microsoft, had their attention on the shiny new opportunity Steve Jobs and Apple had uncovered with the iPhone. For them, on one hand was the proven success and sizeable profits of smartphones, and on the other the barely break-even, untested opportunity that developer services offered. By the mid-2000s Amazon had rolled out EC2 and S3, giving it close to half a decade's head start over the competition.

In my opinion, the Amazon story tells you that not all good decisions pay off, but if you keep repeatedly making good rational decisions, you milk what works for all it's worth, and you have some luck on your side, you are bound to do well. What makes Bezos a great leader to me is his ability to keep identifying good opportunities, be it in business ventures or in hiring great people. Time will tell if he will be remembered as one of the greatest.

Monday, May 22, 2017

To each their own (literal parse!)

It looks like I'll be playing around with Ballerina for a while more, so it's only fair that I document some of my experiments for posterity's sake, especially considering the lack of content on the subject. But this blog is probably not the best place to do that, so if you're interested in hands-on reads on Ballerina, find them here: http://ballerinagist.blogspot.com/




Tuesday, May 2, 2017

Three types of quotes in the world..

If you've dealt with the Linux shell, it's very likely you've come across different types of quotes. They mean different things to the shell, and as such are handled by it in different ways. Here's a quick review.
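A minimal sketch of the three types and how the shell treats each:

    name="world"

    # Single quotes: everything is taken literally, no expansion
    echo 'hello $name'              # prints: hello $name

    # Double quotes: variables (and command substitutions) are expanded
    echo "hello $name"              # prints: hello world

    # Backquotes: the enclosed text is run as a command and substituted
    echo "today is `date +%A`"      # prints: today is <current day>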



Tuesday, April 25, 2017

Error Handling and Functions with BASH

BASH[1], to my understanding, is not meant to be used as a general-purpose programming language, and attempting to use it for such purposes can lead to tricky situations. Having said that, for the sake of argument, let's assume you need to use it for such a purpose. Here's how you can do some of the housekeeping tasks with it.

Error Handling and API Calls


Process exit codes can be used as a quick assessment of successful completion; however, it should be kept in mind that checking the exit codes alone cannot guarantee the output is error-free and as you expect it to be.

The "$?" special variable contains the exit code of the last process run in the session, where 0 denotes a successful process exit while a non-zero value denotes a failure. This variable, when coupled with conditional statements, can bring some basic error handling capabilities to your scripts.
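A minimal sketch of the idea (the file paths are placeholders; note that $? is captured immediately, since the very next command overwrites it):

    #!/bin/bash
    cp /etc/hosts /tmp/hosts-backup
    status=$?                      # capture the exit code right away

    if [ "$status" -eq 0 ]; then
        echo "copy succeeded"
    else
        echo "copy failed with exit code $status" >&2
        exit "$status"
    fi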


API calls can be made using the cURL HTTP client. This package comes pre-installed on many Linux distributions, but it's advisable to check for its availability using the package management tools available on the system.

A simple way to validate API calls is to use the -v flag of the cURL client to print verbose output, and then pipe that output to grep to assert on a predefined value.
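Something along these lines (the URL is a placeholder of my own; cURL writes its verbose output to stderr, hence the 2>&1 redirection before the pipe):

    # Call the API and assert that the response carries a 200 status
    if curl -s -v https://api.example.com/health 2>&1 | grep -q "200 OK"; then
        echo "API call succeeded"
    else
        echo "API call failed" >&2
        exit 1
    fi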



Functions


Bash provides limited capabilities for abstracting out code into functions; find out more about those capabilities here[2]. As BASH is not capable of creating user-defined data types, you might find yourself in a situation where you need to return a bunch of data items back to the caller. A crude way to achieve this is to join the data items with a unique delimiter and then break the values back up on the caller's side using awk.
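A minimal sketch of the delimiter trick (the function and field names are hypothetical):

    #!/bin/bash
    # Return three values joined with "|" and split them at the call site
    get_user_record() {
        local name="alice"
        local uid="1004"
        local shell="/bin/bash"
        echo "${name}|${uid}|${shell}"
    }

    record=$(get_user_record)
    name=$(echo "$record" | awk -F'|' '{print $1}')
    uid=$(echo "$record"  | awk -F'|' '{print $2}')
    echo "user ${name} has uid ${uid}"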



When you start resorting to elaborate methods to get simple tasks done, you are probably extending BASH beyond its capabilities, and it's time to move on to a more capable language such as Python.


[1] - https://www.gnu.org/software/bash/manual/html_node/
[2] - http://www.linuxjournal.com/content/return-values-bash-functions

Wednesday, March 8, 2017

Barriers to FOSS adoption and the role of the provider

A couple of weeks ago I posted a write-up on why organizations can't afford to ignore open source anymore. The write-up was based mainly on my own experiences and the knowledge I came to possess as a result of working for a product company built around an open source business model. Towards the end of the writing process, I felt there was a lot more that could be said about FOSS, especially concerning adoption and the role providers play in assuaging adoption pains.
....
As discussed in the previous post[1], the drive for Open Source adoption in organizations can come from the industry or the broader technological community, manifested as ground-level traction among the in-house engineers. This driver was examined by Miralles, Sieber and Valor, whose research considered two views of technology adoption: technical push, a deterministic view where the decision to adopt is mediated by an accumulation of factors such as technical attributes, the cost of ownership and the ability to transition to open source; and organizational pull, comprising factors intrinsic to the organization such as organizational capabilities, vendor-organization match and the psychology of the decision maker. The research found FOSS to have a greater technical push than its proprietary counterpart in all areas except lock-in (ease of transition), but in most cases it was beaten, due to low organizational pull, when it came down to the decision to adopt (2006). That study was done a decade ago; it would be interesting to see whether the strength of the correlation between technical push and the decision to adopt has changed over time. Looking at the current technological landscape, one can assume it has gotten stronger.
The technological superiority of FOSS has been discussed in countless research papers and countless more web articles. Furthermore, even if the strength of the correlation between technical push and the decision to adopt has grown since the research in question was done, common sense dictates that the factors grouped under organizational pull should have a stronger relationship with the decision. The decision to adopt should therefore be made by assessing the potentialities through both views. This is where I feel FOSS is lacking: its ability to convince that it is the right choice regardless of the view one adopts when assessing it. The purpose of this post is to show the reader how the shortcomings of FOSS can be circumvented through the services of a provider.
Before we can address how a provider can counter the drawbacks of Open Source and other barriers to its adoption, we should have a better idea of what those drawbacks and barriers are. Hauge et al. found lack of support and expertise, difficulties in selecting the right OSS products and ambiguities in liability to be major barriers to adoption (2010). Morgan and Finnegan found compatibility issues, lack of expertise, poor documentation and a lack of roadmaps and other documents pointing to the strategic direction of Open Source projects to be technical drawbacks; moreover, they found a lack of ownership, a lack of support and difficulties in finding the right staff/competencies to be business drawbacks (2007). As mentioned in an earlier paragraph, Miralles, Sieber and Valor highlighted three organizational barriers: organizational capabilities, i.e. concerns regarding in-house expertise on the technology and the logistical/operational constraints of building up that expertise; network externalities, the indecisiveness one might feel due to open source's phase of evolution; and the psychology of decision makers, covering factors such as the impression peers' adoption decisions make on your own, and other individual differences.
Providers such as Red Hat and WSO2 build value on top of FOSS by addressing these shortcomings and drawbacks, providing products that are better tailored to enterprise needs. Providers that have adopted similar open source business models offer paid auxiliary services covering implementation, support, maintenance and consultation. These auxiliary services can be availed of, to great effect, by any organization considering Open Source adoption but discouraged by its perceived drawbacks.

Lack of expertise,

Product expertise will be needed by organizations at various stages of their technology adoption and transition journeys. The specific expertise needed at one point of the adoption journey can differ from another; therefore, organizations evaluating providers should look into a potential provider's ability to cater to these varied needs.
Providers should be able to address concerns about an organization's in-house expertise with focused on-site consultancy services, documentation and other training resources. Furthermore, the providers should possess the competency to address concerns, specific to the organization's existing technological infrastructure, that may come up at different points of adoption. Lastly, the providers should have dedicated channels (such as support channels) through which to disseminate this expertise.
Organizations, in turn, can put programs in place to cultivate in-house expertise in the products, leveraging the documentation and other learning resources provided as value additions by the providers.

Compatibility issues and concerns of lock-in,

Organizations may be discouraged by possible compatibility issues between FOSS and their existing technological backbone. Though this concern is common to both open source and proprietary software adoption and transition, in the case of open source the information needed to decide on compatibility may be hard to come by. Therefore, organizations should be able to get the assistance of the providers when evaluating the possibility of adoption, and dedicated channels should exist for organizations to access this information from the providers. Organizations should opt for providers that assist in this evaluation process; some providers may even offer specialized services catering to this very requirement.
Considering the comparatively low cost of ownership, it may be worthwhile for some organizations to purchase lower-tier support/assistance services purely for compatibility evaluation. Organizations may use these services to run pilot projects with the products; the benefits of such pilot projects are twofold, as they also build up in-house competency in the technology.
Some providers may even offer their products as SaaS offerings, and organizations evaluating OSS may use these to get a better idea of the functional compatibility of the products. It should be noted, though, that the functionality of such SaaS offerings may at times be cropped to improve suitability.

Concerns with liability and longevity,

As FOSS is maintained by a community unbound to organizational goals and needs, many organizations see liability as a barrier to adoption. Therefore, organizations should expect providers to address these liability concerns with contractual obligations such as SLAs, and should look for dedicated L1/L2 support from the providers.
As with proprietary software adoption, it makes sense to split the responsibility for the adopted technology between internal resources and the external resources of the provider. Some providers may be able to supply dedicated agents that integrate into the in-house teams and take up concerns with the provider on behalf of the organization.
It makes sense to go with a provider that has been in the domain for a considerable length of time and has proven domain expertise. Furthermore, organizations may look into a provider's product strategy through product vision documents, roadmaps and the like when assessing longevity.

In summation, it is my personal opinion that many of the barriers to adoption that prevent organizational pull from favoring FOSS can be addressed with the assistance of providers, given a thought-out adoption plan on the adopter's side and the backing of the right provider; and that those organizations that embrace Open Source as discussed in this post will have an advantage over those that have opted for proprietary solutions.

List of References

Hauge, Ø., Cruzes, D. S., Conradi, R., Velle, K. S. and Skarpenes, T. A. (2010) 'Risks and Risk Mitigation in Open Source Software Adoption: Bridging the Gap between Literature and Practice'. IFIP Advances in Information and Communication Technology 319(1), 105-118

Miralles, F., Sieber, S. and Valor, J. (2006) 'An Exploratory Framework for Assessing Open Source Software Adoption'. Systèmes d'Information et Management 11(1), 85-111

Morgan, L. and Finnegan, P. (2007) 'Benefits and Drawbacks of Open Source Software: An Exploratory Study of Secondary Software Firms'. The International Federation for Information Processing 234(1), 308-311


Monday, February 27, 2017

Open Source, why organizations can’t afford to ignore it?

A couple of weeks ago I ended up here, and it occurred to me that for the last year or so I had published posts ranging from WSO2 product-related how-to's (what this site was initially supposed to be about) and Linux how-to's that bordered on immaturity, to opinionated tech event reviews from my local community. This thought led me to change the blog's name to "The FOSS Merchant".

FOSS stands for Free and Open Source Software; think Apache (no, not this guy[1]) and Ubuntu. "Free" in this context means something closer to "free speech" than "free beer". Exorcise your linguistic demons here, if you must[2]. Since "FOSS Merchant" sounds like an oxymoron, I thought I should write a few words on the matter.

…..

Open Source Software, as its name suggests, is free to be used and worked on by anyone, meaning most open source software projects are maintained by a community spanning organizations and geographical boundaries. This facet of open source software makes it more sensitive to industry demands and needs; it makes it evolve faster. In brief, organizations that tap into this quality of open source can, in turn, use it to increase their technological agility.

The days when open source software was considered a plaything for the geeky, and not a contender for the professional, are far behind us. At present, 1 in 3 servers[2] powering the internet runs on Linux, and the number keeps rising every year. In fact, one survey found Linux to be preferred by 78% of participants (including some Fortune 500 companies) for their cloud-based endeavors[3], a figure that makes sense considering the popularity of server-crafting tools such as Docker and the extended support they provide to Linux. Furthermore, there are plenty of examples of product companies that have successfully worked open source into their business models: from Red Hat, estimated to be worth 14 billion USD, and Sun's 1-billion-USD acquisition of MySQL, to last week's news of MuleSoft's 100 million IPO, and WSO2's 11-year track record of providing leading middleware solutions and driving digital transformation[3] for organizations.

A common accusation made against FOSS is that it is not as polished or as user-friendly as its closed source counterpart. Although this is true on occasion, it is my personal opinion that this characteristic of open source software attracts the kind of engineers who end up having a bigger say in technological decisions at ground level, and who thus drive organizational change in terms of technology in the long run (common sense dictates it's better to ride this wave than to expend energy redirecting it). This hypothesis is backed by some empirical evidence. In one survey done by The Linux Foundation[4], 52.7% of participants cited in-house expertise as a key driver for OSS adoption. Another study on the adoption of, and transition from, closed to open source software, based on the Technology Acceptance Model (an information systems theory dealing with technology adoption), found a negative correlation between perceived ease of use and adoption of OSS in the public sector[5]. It is also easy to see this hypothesis as one of the factors behind the phenomenal success of the Git version control system, when one takes into account that it was considerably different (conceptually and in usage) from other version control systems of the time and was often accused of not being user-friendly.

Other than the agility and ground-level traction that can be expected from selecting OSS (as discussed in the preceding paragraphs), organizations may also take comfort in knowing they have complete access to the code base of the software they are procuring; this may not even be an option with some proprietary vendors. This disclosure, coupled with in-house expertise on the code base that can be cultivated over time, can amount to technological independence and assurance for the organization.

However, OSS posses its own novel set of concerns which organizations should address before making the transition. As most open source projects are maintained by volunteers, they lack documentation and other knowledge resources which organizations need to make informed decisions on suitability. Tailoring the software to exact organizational needs and maintenance afterwards may prove to be difficult,  as expertise on the software cannot be readily accessed. As this is the case for many open source projects, it is of paramount importance that organizations select a commercial OSS vendor/partner that can bring in value by addressing these concerns. Furthermore, commercial OSS vendor/partner’s more often than not provide useful features that cannot be found in the original projects.