

CSPs – it is NOT the customers’ fault

Many years ago I used to say that the one trouble with the IT vendor community is that it understands how to be `bought from' but has little idea how to `sell to', and over the intervening years I have not seen too much evidence of any change in that business model.

What is worse, the advent of the cloud is showing that, not only has that community not really changed in expecting the customers to come and buy – and build their businesses accordingly around what the technology can offer – but it has also largely missed the fundamental shift that the cloud brings with it – namely the need to sell services, not technology.

The Lloyds Bank UK Business Digital Index 2014 provides some important evidence of this situation. What is more, it is evidence that highlights where the majority of the vendor community are going wrong. The survey has shown that just 50 percent of small businesses have a website, with the majority of these only providing basic functionality, while just under a third (29 percent) believe that being online isn’t relevant for them.

Given the number of years that web capability has been available, the fact that nearly a third of the survey sample can’t see the relevance of exploiting even that simple marketing tool is a sign of the failure of the CSP vendors to understand what they are trying to sell to their potential customers, especially the SMEs.

And the true nature of the problem is demonstrated by a response to the Lloyds Bank survey put out by 123-reg, the UK’s largest domain registrar. This demonstrates what is, to me, the classic problem of the inbuilt `bought from’ approach of IT vendors. `The report points’, its statement says, `to a worrying lack of understanding among UK SMEs when it comes to getting their business online. Businesses must plan for the future and wake up to the imperative of having an online presence’.

In other words it is their fault, rather than the fault of the IT and cloud service vendors, that they cannot see too much relevance in exploiting the technology.

In a statement, Matt Mansell, Group Managing Director of 123-reg’s parent company, Host Europe Group, said: “It’s staggering to think that in 2014 just half of small businesses have an online presence and that one in three seem to have dismissed the opportunity altogether. Having an online presence was once a nice-to-have, but it’s fast becoming a precursor to running a business. The movement of traffic towards the web is staggering, with more and more of us turning to the internet to assist us with simple everyday tasks.

“By not having a website, these businesses risk missing out on some very real opportunities. We aren’t just checking the news and accessing social media sites anymore, but checking whether or not the hairdresser is open and what new produce is in the local deli. Even if you aren’t selling on your site, it’s a great window to countless potential customers. If they can’t find you, and see what you’re all about, the chances are that they will take their custom elsewhere.”

And up to a point this is correct. But what are the services they are being offered? Most often, I suspect, they will be in the form of technology deliverables that make little sense to the bosses of small businesses struggling to keep their heads above water – or keep up with the flow of business coming in.

It is up to the industry to actually configure the final, end user services that the SMEs need, not just dump a bunch of `assorted technologies’ on their doorstep and tell them it is up to them what they do with it all.  This is like giving SMEs a tree stump, a block of graphite and a spoke shave, and saying: “if you’re clever, you can probably make some pencils out of that.”

The Parallels conference in February showed that the service providers, and their channel partners, largely do not understand their marketplace and what it requires. As the CEO of a business selling the services of an application to end users through CSPs said to me recently, “I have come to realise that most CSPs have no idea what their customers do.”

If they do not understand what their customers are about, how on earth are they going to sell them anything? All they can do is sit there and hope that some customers come in and buy something. What is worse, I suspect many of them do not yet understand who their real customers ought to be, how to find them, or how to partner with them.

Those customers should be the businesses and organisations that have the brand names that resonate with the SMEs. They may well be trade associations and similar organisations as well as commercial businesses. And what they will need is the aggregation of both the compute resources and the applications and tools relevant to their market sectors. Those businesses and organisations can then go out and service their subset of the SME customer base with a package of services configured to the needs of a specific SME marketplace.

Then the SMEs will buy because they are being offered solutions relevant to their business needs, and where they can see the sense in using those services. They will thank their lucky stars that someone thought of offering them a service that was a no-brainer to sign up for.

Posted in Business.


Testing is important, but think of the wider context

Testing is always important, that is for certain. But sometimes I do wonder whether some testing is looking at the wrong issues, or perhaps the right issues but for the wrong, or maybe poorly thought-through reasons. It has to be set in the wider context of purpose – and that purpose has to be enhancing – or at least maintaining – a customer’s experience of using a service.

Take, for example, some news from US-based cloud load-testing business, BlazeMeter. Earlier this year, the company introduced a service that is, on the face of it, a must for all startups, especially those looking to offer applications and services in the cloud. It is certainly going to be an important part of ensuring the customer experience is not defeated at the first hurdle.

And in that context, the thought keeps nagging away at me that, while this testing is both valid and important, it is also looking at only half of a much broader experience management issue.

BlazeMeter, which already offers a JMeter-based load testing cloud, has launched a new support program offering qualifying startups a free, six-month package of its open source compatible load testing cloud services.

This means startups can ensure their web pages and mobile applications will hold up under high demand and perform as expected, delivering a seamless user experience when brought to production.

The new program provides startups with 20 free performance tests per month for up to 1,000 users across two load servers, and two weeks of data retention. This represents a useful saving of more than $2,000 a year.

The reasoning behind this is simple. “We want to lend a hand to our fellow startups with this package, which will allow them to run sophisticated, large-scale performance and load tests quickly, easily and affordably,” the company says.

Provisioning dozens of test servers and managing the distribution of large-scale load tests can present significant cost and agility barriers to most startups.

BlazeMeter’s cloud-based testing solution not only solves this problem but also maximises the speed at which development teams can gather valuable load-testing metrics by offering the best options for scalability, cost savings and geographic reach.

The BlazeMeter cloud, which is 100 percent compatible with Apache JMeter, also allows developers and operations teams to select the global locations from which they want to review the load and response times of their applications, without having to stand up a data center in each location.
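For readers who have never run one, the basic mechanics of a load test are easy to sketch. The few lines of Python below are purely illustrative – they are not BlazeMeter’s or JMeter’s tooling, and the target URL, user counts and request numbers are invented assumptions – but they show the underlying pattern: simulate a burst of concurrent users and summarise how the response times hold up.

```python
# Minimal illustrative load test: fire concurrent requests at a page
# and summarise the response times. Not BlazeMeter's or JMeter's API;
# the URL and counts are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://example.com/"   # hypothetical page under test
CONCURRENT_USERS = 50                 # simulated simultaneous visitors
REQUESTS_PER_USER = 10

def simulate_user(_):
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        response = requests.get(TARGET_URL, timeout=10)
        timings.append((response.status_code, time.perf_counter() - start))
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = [t for user in pool.map(simulate_user, range(CONCURRENT_USERS))
                   for t in user]
    durations = sorted(d for _, d in results)
    errors = sum(1 for code, _ in results if code >= 500)
    print(f"requests: {len(results)}, errors: {errors}")
    print(f"median: {durations[len(durations) // 2]:.3f}s, "
          f"95th percentile: {durations[int(len(durations) * 0.95)]:.3f}s")
```

In a real service such as BlazeMeter’s, the same idea is scaled up across many load servers and geographic locations, with far richer reporting.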

This is, obviously, an important capability. Putting out a web service that dies at the first sign of any significant load will certainly be a zero experience for customers, and therefore very bad for business. But expected high-stress workloads are one thing. The real world of providing web services is full of the unexpected – and it is the unexpected that can get in the way of users of such applications and services conducting whatever transactions they had planned.

The most common problem is that visitor traffic to web services ends up being unmanaged: they all arrive and there is no prioritisation, it is fair shares for all. But from whatever perspective the provider of that service is coming – and most often it will be the generation of business and revenue in some way – visitor prioritisation is a key capability.

It will be valuable indeed that the testing BlazeMeter undertakes shows that a web service can handle 1 million accesses a minute with ease. But if 99 percent of those hits are, for example, from `tyre-kickers’ simply crawling round the web aimlessly, that can be bad for business. It means the remaining 1 percent with real purchasing requirements may not be able to get on the website.

And don’t say you have never done your bit of aimless, web-based tyre-kicking, I know I have.

There are now ways of managing that prioritisation, however. Tools such as vAC from NetPrecept can identify the actions of those site visitors with obvious intentions of buying and can, at times of high-loading, ensure that they are prioritised in terms of access.

The tyre-kickers can find their access not just restricted but even terminated in a positive manner, such as being informed what is happening and being given a `priority access’ voucher to use when they come back.
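To make the idea concrete, here is a minimal sketch of how intent-based prioritisation might work in principle. It is not NetPrecept’s vAC – the signals, weights and thresholds are invented for illustration – but it captures the behaviour described above: under heavy load, visitors showing purchase intent are served, while aimless browsers are deferred with an invitation to return.

```python
# Illustrative visitor prioritisation (not NetPrecept's vAC): under heavy
# load, buyers are admitted first and aimless browsers are deferred with
# a return voucher. Signals, weights and thresholds are invented.
import heapq
import itertools

HIGH_LOAD_THRESHOLD = 0.8  # assumed fraction of capacity in use

# Lower numbers are served first; purely illustrative intent signals.
PRIORITY = {"checkout_in_progress": 0, "item_in_basket": 1,
            "browsing_products": 2, "aimless_crawl": 3}

_counter = itertools.count()
_queue = []  # (priority, arrival order, visitor id)

def admit(visitor_id, signal, current_load):
    """Decide what to do with a visitor given the current load."""
    if current_load < HIGH_LOAD_THRESHOLD:
        return "serve"                      # plenty of headroom: serve everyone
    priority = PRIORITY.get(signal, 3)
    if priority <= 1:
        return "serve"                      # buyers always get through
    heapq.heappush(_queue, (priority, next(_counter), visitor_id))
    return "defer_with_voucher"             # turn away politely, invite back later

def next_deferred():
    """Release the highest-priority deferred visitor when capacity frees up."""
    return heapq.heappop(_queue)[2] if _queue else None

# Example: under 90% load, a tyre-kicker is deferred, a buyer is served.
print(admit("visitor-1", "aimless_crawl", 0.9))          # defer_with_voucher
print(admit("visitor-2", "checkout_in_progress", 0.9))   # serve
print(next_deferred())                                    # visitor-1
```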

Actions of this type not only ensure that the business is conducted as efficiently as possible even during times of high stress loadings – for example, vAC can allow users of a website actually making a transaction to continue to completion even if the website suffers a Distributed Denial of Service attack at the time – but also make an important contribution to the wider issue of at least maintaining, and often enhancing, a customer’s experience of using a website.

So testing of the type offered by BlazeMeter is important, but in the context of enhancing the customer’s experience of using a service it is only part of the battle – and can end up helping to create a problem in its own right. Without the actual access process being managed and prioritised even very high capacity websites will get swamped. If that leads ISVs and service providers towards the notion that adding even more capacity is the right answer then they are simply sliding down a vicious spiral of ever-increasing cost.

It is not just access capabilities that are important to good customer experiences, it is managing who gets priority, and why.

This opinion piece was first published in Cloud Services World

 

Posted in Business.


IDaaS can now be the foundation of cloud security

It is still quite easy to think of Identity Management as a very specific, narrowly defined part of the overall security regime a business needs to apply, particularly if it is operating much of its information management services in the cloud. But in practice, the ID of users is now fast becoming the key component in providing a complete suite of security services, all driven by the definition and management of ever-more granular operational policies.

The control of any user’s ID, on any device, operated over the cloud from anywhere in the world that an Internet connection can be made, is now the start point for the application of full management of what applications and services that user can access, where and when they can be accessed, what they do with the data, and what privileges they are allowed as users.

It also provides a one-stop-shop for the de-provisioning of any individual as soon as that function is required.

What is more, this moves control of all the core aspects of how the IT resources are used back to the IT Department, which, instead of being in charge of the daily machinations of the technology in use – that role now largely slipping away to many different types of third party hosted service providers – now becomes the guardian of the overall `business process’.

This is certainly the view of Darren Gross, the European Director of ID management specialist, Centrify, a view that he would suggest is borne out by the company’s growth rate.

“IT is now getting back control as it can now provide finely granular privilege management for controlling appropriate access to users, and this is being taken up globally,” he said. “We are now growing our partner programme geographically with local partners, the ones that understand the local culture, language and issues in their regions.”

The company has recently announced that it has expanded in Europe, Middle East and Africa, doubling the company’s own headcount in the region and growing its partner channel by 139 percent. And Gross is still on the look-out for others.

One of the keys to this growth rate is the fact that Centrify delivers ID management capabilities as a cloud service – IDaaS. This makes it an easy deliverable to partners, and an easy management issue for the company itself. Centrify can itself provide the service to end user businesses that have been identified and sold the service by a partner. It can also allow some partners, particularly those with hosting capabilities in place, to host and manage service delivery themselves.

The control of ID authentication and user access means that IT now has full control over what the company calls Unified Access. This covers who is provisioned on a service, what applications they are allowed to use, where and when they are used, and what client devices are used to access them. And because it is a cloud-based single sign on (SSO) process, it is possible for individual users to choose just about any practical device they want, and at the same time, allow IT to identify the actual device being used as part of the overall user authentication  process.
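As a way of picturing what such a `unified access’ check involves, the sketch below evaluates user, application, device, location and time together before granting access. It is a hypothetical illustration only – the policy shape and field names are my own invention, not Centrify’s API.

```python
# Hypothetical "unified access" policy check: identity, application,
# device, location and time evaluated together. Not Centrify's API;
# the policy fields are invented for illustration.
from dataclasses import dataclass
from datetime import time

@dataclass
class AccessRequest:
    user: str
    app: str
    device_id: str
    country: str
    local_time: time

POLICY = {
    "finance-app": {
        "allowed_users": {"alice", "bob"},
        "allowed_countries": {"GB", "DE"},
        "registered_devices": {"laptop-0042", "phone-0107"},
        "working_hours": (time(7, 0), time(20, 0)),
    },
}

def evaluate(request: AccessRequest) -> bool:
    rules = POLICY.get(request.app)
    if rules is None:
        return False  # unknown application: deny by default
    start, end = rules["working_hours"]
    return (request.user in rules["allowed_users"]
            and request.country in rules["allowed_countries"]
            and request.device_id in rules["registered_devices"]
            and start <= request.local_time <= end)

# A registered user, on a known device, in-country, during working hours.
print(evaluate(AccessRequest("alice", "finance-app", "laptop-0042",
                             "GB", time(9, 30))))   # True
print(evaluate(AccessRequest("alice", "finance-app", "laptop-0042",
                             "US", time(9, 30))))   # False: wrong country
```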

The equally important corollary to access authentication in an IDaaS is the ability to then fully audit the activities of individual users. This means that reactive investigations of security breaches can become both thorough and straightforward, with the processes and individuals concerned being readily identifiable.

But it also means that proactive, policy-based security regimes can be established based on the application of real-time analytics to the comprehensive audit data that the Centrify system produces. This does suggest that it can form the foundation of a policy-based `stop activity’ operation, where any unusual operation by an individual user can be terminated as it is begun.

According to Centrify’s European Technical Director, Barry Scott, the company has already moved some way along this path, with the development of a `stop and justify’ routine, where user actions can be suspended and the user asked to justify their actions. This, at least, can capture the activities of malware such as GOzeuS and CryptoLocker which operate in the background without direct user knowledge.
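A crude illustration of the `stop and justify’ idea might look like the following: actions that sit far outside a user’s normal audit history are suspended until the user explains them. Again, this is a sketch of the principle, not Centrify’s implementation; the history, threshold and action names are invented.

```python
# Illustrative "stop and justify" check (not Centrify's implementation):
# actions rarely or never seen in a user's audit history are suspended
# pending justification.
from collections import Counter

# Hypothetical audit history: counts of each action per user.
audit_history = {
    "carol": Counter({"read_report": 240, "export_report": 3}),
}

UNUSUAL_THRESHOLD = 0.01  # actions seen in under 1% of history are suspect

def check_action(user: str, action: str) -> str:
    history = audit_history.get(user, Counter())
    total = sum(history.values())
    if total == 0:
        return "suspend_and_justify"   # no history at all: ask first
    frequency = history[action] / total
    if frequency < UNUSUAL_THRESHOLD:
        return "suspend_and_justify"   # e.g. bulk deletion by a read-only user
    history[action] += 1               # normal action: record and allow
    return "allow"

print(check_action("carol", "read_report"))     # allow
print(check_action("carol", "delete_archive"))  # suspend_and_justify
```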

“In practice, however, this type of development is something we would leave to our partners, as they are often the specialists in such areas,” he said. “For example, we already have a partnership with Splunk on the use of audit data to help identify issues and problems with the operation of complex, cloud-based infrastructures.

“We also have a close partnership with Samsung, where we provide the enabling technology for Samsung Knox.”

This introduces the use of containerisation into the smartphone and tablet world to provide complete separation of the personal and work environments on the same client device.

Gross is also aware that the potential of Centrify opens up a number of different opportunities for its partners to not just resell the IDaaS, but build useful services on top of it. For example, there is an oft-discussed trend towards more individuals becoming self-employed contractors rather than salary-earning company staff. That way they get to use their skills across a number of different businesses, and the businesses get to only pay for the time the contractor commits to their specific projects.

Loose `federations’ of this type could benefit from a service that exploits IDaaS to allow the creation and disbandment of groups of contractors where the access privileges can be tightly defined and controlled, and the workflow fully audited for both security and billing processes.

Centrify has recruited a number of additional partners across EMEA, expanding its channel by 46 new partners.  These include Nebulas, Quru, Somerford Associates, SecurityMatterz and AT Computers in the UK.  The company has further expanded in EMEA East and the DACH region (Germany, Austria, and Switzerland), building new partnerships with Fujitsu Technology Solutions, Science+Computing (a Bull Group Company), Cross Media and Mint.  Centrify has also recruited three new Value Added Distributors, Hermitage Solutions in France, IREO in Spain and Portugal, and Inforte in Turkey.

Posted in Business.


Time to automate parallel development tools

An interesting – and arguably important – example of a favourite hobby horse of mine emerged at the recent Intel Software Development conference in Chantilly, France. That hobby horse is the way that technologists get so carried away with the undoubted cleverness of their developments that they sometimes miss the point that, ultimately, it is technology’s application in the real world that is the important factor.

The Intel conference is an annual spring-time event focussing on programming and applications development in the world of parallel computing. To be fair, you can’t get much more `technical’ than that, for parallel processing is largely the domain of supercomputers processing esoteric meteorological, scientific and engineering problems at PetaFLOPS processing speeds (that’s quadrillions of calculations per second).

Except that this now is a `used to be’ scenario. Parallel processing has been creeping into the upper echelons of mainstream computing for a while, but with the advent of cloud computing that creep is getting much faster. Now add in the advent of mainstream big data analytics and the need for parallel processing is fast changing from `nice to have’ to `essential’.
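The reason analytics workloads parallelise so naturally is that the same calculation is applied independently to many chunks of data. The toy Python example below makes the point – the dataset, chunking and statistic are invented purely for illustration – but the pattern of splitting work across every available core is, at vastly greater scale, what the serious parallel tools do.

```python
# Toy data-parallel analytics step: split a dataset into chunks and
# process them across all available cores. Purely illustrative.
from multiprocessing import Pool
import os

def summarise(chunk):
    """Stand-in for a per-chunk analytics step: partial sum and count."""
    return sum(chunk), len(chunk)

if __name__ == "__main__":
    # Pretend dataset: one million readings split into per-core chunks.
    data = list(range(1_000_000))
    chunk_size = len(data) // os.cpu_count()
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool() as pool:                       # one worker per core by default
        partials = pool.map(summarise, chunks)

    grand_total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    print(f"chunks processed in parallel: {len(chunks)}")
    print(f"overall mean: {grand_total / count:.1f}")
```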

The combination of big data analytics and the cloud is also happening – it is only a matter of months, and quite possibly just a couple of weeks, before SAP announces that its in-memory analytics engine, HANA, is available as a SaaS-delivered service. When that happens, parallel processing power in the cloud will be the only sensible option.

So, here then is Intel, developer of x86, the leading parallel processing architecture. This is also at the heart of most commodity servers used as the basis of cloud infrastructures, where packaged applications and service development environments are commonly found. Despite this congruity, it became clear at the conference that the company sees no need, for now, to move in the direction of building automated development tools for parallel processing applications. This is despite acknowledging that application areas such as big data analytics really do need them.

The argument put forward by the company’s Chief Evangelist for software products, James Reinders, is understandable. Basically, it is that automating development leaves open the possibility of building in processing and operational inaccuracies to code that could spawn and go viral in a parallelised cloud environment. On the face of it that is a very good argument and something certainly to be avoided.

But on the other hand, one of the parallel development tools Intel has produced allows developers to bit-flip – change the state of a single bit in a single byte of code. That would seem to put a good deal of trust into the developer’s hands to get it right.
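For anyone unfamiliar with the term, flipping a bit is a tiny operation – an XOR against a mask – but one that silently changes meaning, which is why handing it to the developer demands so much trust. The snippet below is a generic illustration of the operation itself, not Intel’s tool.

```python
# Flipping a single bit in a single byte is just an XOR with a mask.
# Generic illustration only; this is not Intel's parallel tooling.
def flip_bit(byte_value: int, bit_position: int) -> int:
    """Return byte_value with the bit at bit_position (0-7) inverted."""
    if not 0 <= bit_position <= 7:
        raise ValueError("bit_position must be between 0 and 7")
    return byte_value ^ (1 << bit_position)

original = 0b0100_0001           # 0x41, the ASCII code for 'A'
flipped = flip_bit(original, 1)  # invert bit 1
print(f"{original:#04x} -> {flipped:#04x}")  # 0x41 -> 0x43: 'A' becomes 'C'
```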

And on another hand, the company has just released such a packaged-up development solution for a specific problem. This is the new HTML5 Development Environment, for developing apps across a range of different devices. It is aimed at helping with issues such as working with different architectures for cross-platform applications, developing for different aspect ratio displays and different user interfaces.

It supports development for iOS, Android, Nook, Amazon, Windows 8 and deployment in the following stores – Apple App Store, Google Play, Nook Store, Amazon Appstore for Android, and Windows Store. It also supports delivery of HTML5 web applications for Facebook and Google, amongst other environments.

Reinders sees the HTML5 Development Environment as a one-off special case. Personally, I see it as the first of many similar tools, which can then be linked together to build richer, more comprehensive applications and services. For example, the HTML5 tool would make the obvious output/delivery component of a big data analytics service development and orchestration suite for cloud environments. That would appeal to not only big enterprise end users but also every Cloud Service Provider business looking to offer richer levels of customer service and engagement.

I am sure some company will do just this in the not too distant future. But Intel sits here with the skills, the knowledge and the capability to do it right now, yet it seems strangely reluctant. Maybe it is scared of some future anti-trust case for creating and `owning’ the dominant development environment?

If that is the case I’d just suggest they do it and be damned. The need is about to get much more important than the legal implications. And sometimes the law just has to play catch up.

Posted in Business.


What has technology to do with the cloud?

With Yahoo changing its description from `media business’ to `technology business’, when it is even more a media business than ever, the question needs to be addressed – are we now too hung up on the hook of `technology’ just as much of the world moves `post technology’?

Just before Christmas, in amongst a set of predictions for this year, 2013, I wrote the following:

Start of a `Post Computer’ World

Cloud-delivered services carry with them an important difference – the focus of attention on the `service’ being provided, what is actually required from it and what benefits it brings to the business as a whole. In many ways, the means of delivery is less relevant.

With IT, however, there still remains a great deal of emphasis on the `T’ – technology. This means technology for its own sake.

It would seem that the `backlash’ against what some have started to call a `post-technology’ world has already started. It seems technology for its own sake is setting out to defend itself early. It seems like the technology companies are starting to feel that if they stop getting written about – and being `important’ as technology creators and providers – they will cease to exist.

But when it comes to the cloud at least, they are no longer that important. It is the services that are built upon the technology which, to a growing number of end users, are the important element. Many tech companies seem to be missing the point of this change and why it is more important than the tech itself.

For example, according to a recent press report, Yahoo has changed its official description of itself in important US Federal documents. In its latest 10-K filing with the Securities and Exchange Commission Yahoo now refers to itself as a `global technology company’. In the past it has called itself a `digital media company’.

Another press report covered a presentation made by Anthony Miller, Managing Partner in the analyst firm, TechMarketView, to the recent annual conference of Intellect, one of British technology industry’s main representative bodies. In it, Miller suggested that the cloud – and SaaS services in particular – could `evaporate’ over the coming year.

The move by Yahoo – a company that has been a cloud service provider and a digital media business before either concept was acknowledged in the marketplace – is arguably amongst the dumbest marketing moves ever. As already stated, the technology is not actually irrelevant in practice. But it is in marketing terms, especially when it comes to cloud and digitally-delivered services.

The number of people who really need to know how services are created, delivered or managed is reducing rapidly, while the level of technical knowledge for most users is going (relatively speaking) down. Most people who use Google have no idea they are using a cloud service, or even what a server is. They don’t know what a Gigabyte is, they just know that it now stands for `a bit of storage, like a bookshelf or two’, while 500 GB equals `a fair-sized library’.

What the users want is a service, like being able to text, talk, take pictures and look up pizza parlours on a mobile phone. They don’t need to know the transmission frequency.

But by becoming a `technology company’ Yahoo is doing a couple of backward-thinking things. One, it is showing it is stuck in the Silicon Valley mentality where `tech’ for its own sake is seen as next to godliness. Two, there is a whole world out there that doesn’t give a damn about it, and it is those people who now represent the real marketplace.

Yahoo would seem to be configuring itself to ignore its primary marketplace – the end users. Instead, its self-description suggests it has decided it wants to be known as the maker of piston rings, rather than the source of the dream of the open road.

I know some describe this as an example of a model known as `Technology as a Service’ (TaaS), and I can see where they are coming from, but I still end up feeling it misses the point. The key driver now is what users actually do, what they aspire to achieve with the technology, rather than the ability to appreciate its `coolness’. TaaS, like `Private Cloud’, is arguably a good intermediate descriptor through which business users – larger enterprises in particular – can transition their corporate mindsets. But in practice it is very much like saying `really only slightly different from on-premise’.

To me, it is rather like saying Leonardo da Vinci was a `paint application technologist’ – he was. But is that the summation of his capabilities, talent or contribution, or was there something else about him, and was that perhaps more important?

As for Miller’s reported assertion that SaaS will `evaporate’ I can see why he might view the situation that way, though I feel it is a viewpoint that comes from a technology-first standpoint.

Miller himself is reported to have voiced the opinion that vendors which take their existing on-premise applications and push them out into the cloud as SaaS services may not be doing the most sensible thing. Their chances of success are likely to be limited, and will primarily play to the natural conservatism of larger business users.

History already shows the trend that SaaS brings in its wake. It used to be Siebel for CRM, but in the cloud it is Salesforce, or perhaps SugarCRM. SAP still holds sway in Big Enterprise ERP systems, but in the cloud it is more likely to be NetSuite. And while Hyperion is the daddy of on-premise Corporate Performance Management systems it is companies like Host Analytics that are carving out the SaaS-alternative trail.

Miller makes a couple of interesting, if debatable, points about SaaS, such as it costs more to deliver software as a service than it costs to deliver software on a disk, and that the flexibility inherent with SaaS should come at a premium, not a discount.

The first one I would view as being slightly contentious, not least because it is not really comparing eggs with eggs. Getting an application on a disk is only the start of the cost cycle for the end user, though it is the end of the delivery cycle for the vendor. From that point on, everything else is an additional cost, usually for the lifetime of the application.

SaaS costs the vendor more to deliver, not least because there are resources and bandwidth to provide, even if that is only rented from another service provider. But SaaS is also like the semiconductor industry, where it costs $billions to make one processor chip, but after a while they are being stamped out for a few bucks-a-chuck.

In the same way, new users for many SaaS services become an increasingly marginal cost for the service provider as important cost areas such as system management and support are shared between the user-base, rather than being an individual, total cost that each user has to bear.
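A toy calculation makes the point. The figures below are invented purely for illustration, but they show how a fixed cost shared across a growing user base quickly dwarfs the marginal cost of each extra user.

```python
# Toy illustration (invented figures) of SaaS cost-per-user falling as
# fixed costs are amortised across a growing user base.
FIXED_ANNUAL_COST = 500_000   # assumed platform, management and support
MARGINAL_COST_PER_USER = 12   # assumed extra compute/bandwidth per user

def annual_cost_per_user(users: int) -> float:
    return FIXED_ANNUAL_COST / users + MARGINAL_COST_PER_USER

for users in (1_000, 10_000, 100_000):
    print(f"{users:>7} users: {annual_cost_per_user(users):>8.2f} per user per year")
# 1,000 users: 512.00 | 10,000 users: 62.00 | 100,000 users: 17.00
```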

When it comes to flexibility then yes, that is currently being delivered as a cost saving to the end user, rather than a premium. But stating that is to forget that SaaS is at the beginning of its life as a marketplace. Flexibility, and more importantly end user business agility, will no doubt come to be seen as worthy of a premium – particularly where the flexibility and agility is delivered by service providers that understand the real service needs of end users.

Miller also points to how the SaaS poster child, Salesforce, is still making a loss and boasting about it. Yet most businesses make a loss in their early years, especially if introducing disruptive developments. It is hardly unusual. And others might point at established big systems manufacturers with a penchant for big hardware and complex on-premise systems software that still make most of their revenue from printer inks.

And let’s not forget that lasers were first seen as `a solution in search of a problem’ but are now the backbone of broadband Internet. And 1Kbit semiconductor memory chips were considered interesting, `but won’t replace ferrite core memory systems in computers’. Yet I am now not alone in having 16Gbytes of storage on my key-ring in a freebie memory stick. I would venture to suggest that SaaS is unlikely to `evaporate’, regardless of how hard the technology-delivery businesses try.

As a final thought, Miller is also reported as bemoaning the decline of IT as a marketplace, which he said is in its sixth consecutive year of decline. He suggests that the IT market will grow less than GDP till the end of the decade, and probably indefinitely. But that is to assume that IT as an industry is a core marker of economic activity. It may have been for a while, but I would suggest that it will be the businesses that best exploit the capabilities of IT and the new ways it can be delivered and consumed which will be the coming touchstones of economic success, regardless of their actual marketplace or business sector.

After all, Google is already best known for its cloud-delivered services and applications rather than being one of the biggest manufacturers of server systems, though big in server making it certainly is. But making its own servers is not core to its business, it is just the most convenient and economic hook with which to catch a much bigger fish.

That means Google does not buy servers from `the marketplace’, thus contributing to its decline. And that is a fact of life. The technology vendors face an uncertain future of declining revenues and declining influence. SaaS, and the cloud in general, faces a much stronger future in which making a loss is likely to be a passing phase.

Posted in Business.


Cloud alternatives a much better bet than HS2

The last month has seen a small avalanche of comment on the UK Government’s decision to go ahead with the High Speed 2 rail line from London to Birmingham, and onwards to Manchester and Leeds. Much of it has been negative, not least because of the understandable NIMBY response that accompanies any major physical infrastructure project. But there has also been a good slice of comment on the nature of the investment itself, and it is to that I intend to add further thought.

Investment in UK infrastructure is most certainly needed but railways are, in the light of Government aspirations to build a dynamic, knowledge-based, hi-tech oriented economy for the future, heading off in entirely the wrong direction.

Just after the announcement I heard a pundit (never did learn his name) on TV referencing the French and their High Speed Train experiences and the benefits that accrued. To which the answer is `yes, but…..’. The TGV investments were years ago. What is more, if the French had been thinking about HS2 they would have had it built and running 10 years ago at least. Taking 20 years to build HS2, and not even starting that process for a few years yet, makes the investment just nonsense.

If they had actually set about this investment when it was first being talked about I would have been all for it. But this late in the day I end up feeling this is like someone saying “we need to provide services for industry: let’s make carbon-fibre Penny-Farthings.”

What is the investment for?

To make the investment relevant what is needed is to go back to the basic question: what is the investment for? Now, if it is primarily aimed at creating jobs in its construction then that is a laudable objective, but only a short-term answer. Most of those jobs will be gone once it is built. If, however, it is to provide a platform on which both businesses and individuals can build futures of benefit, worth and value (of both the tangible and intangible varieties), then there are serious alternatives available that are cheaper to install, faster to implement, and of greater value to those that use them.

What is even more important, the phrase `those that use them’ should really encompass as many people in the country as possible, not just those close enough to make sensible use of a particular stretch of railway.

Two better investment options are clearly obvious, and both are likely to be relevant to cloud computing and Cloud Service Providers (CSPs). What is more, they are not mutually exclusive options, and even if both were implemented simultaneously they would save a large slice of the currently planned HS2 investment of some £30bn and could, I suspect, be implemented and delivering a return in a maximum of half the time allotted to HS2.

Make the trains we have work better

The first option is a significant upgrade to the current railway system, which already covers most of the country. The objective would not be to make those lines faster in terms of train speeds, but to greatly improve their capacity by a rebuild of the signalling and train management systems so trains can run much closer together.

Cloud-based `big data’ analytics tools could not only cope with the volume of real time data and analytics needed to manage this process, but could provide the resource redundancy needed to ensure the highest possible service levels and reliability. There would certainly need to be sufficient capacity to provide real time monitoring and analytics of all services and rolling stock, so that failure modes could be tracked as they develop and pre-empted before they turn into failures.

This would benefit far more people and businesses across the country than HS2, and there are already many companies able to provide the technologies and skills needed to make it happen, so there would be plenty of scope for competition – and second-sourcing of supply for many of the core systems and components.

Finally, this option could even help reduce crime figures by re-cabling the entire train network with fibre optics – and give Network Rail a helpful revenue boost through selling the redundant copper cable and over-specifying communications capacity requirements so that the extra could be made available to the open market.

Wi-Fi `blanket-bombing’

The second option is to commit to blanketing the entire country with the highest capacity Internet services everywhere. And the cost of implementation could be kept in check by using Wi-Fi to `blanket bomb’ everywhere, including those hard-to-reach locations where the cost of cabling the service has traditionally proved prohibitive.

As with the `Six Million Dollar Man’, we do have the technology, from many possible suppliers, and with a wealth of options on how services can be delivered. This is the type of project which could be implemented and providing benefit across the country before the first, Birmingham-bound stage of HS2 had got much past Amersham.

And given the oft-stated wish of the Government to turn the UK into a knowledge-based economy built upon the skills of using and developing hi-tech solutions, country-wide high-bandwidth internet coverage makes far more sense and offers much better value to all than a railway line between London, Birmingham, Manchester and Leeds.

Finally, a short history lesson: Ten to 15 years ago, BT was bemoaning the fact that it had laid vast amounts of fibre optics cable across the Atlantic and it was finding it difficult to sell the capacity to anyone. This unused Dark Fibre was, for a while, a significant issue: what was to be done with it?

But back then Google was a babe-in-arms of a business, while Facebook and Twitter were not yet twinkles in their inventors’ eyes. The availability of the Dark Fibre here and around the world played an important part in making such successful developments possible.

The same is true now. Universal, country-wide, high capacity and high performance WiFi would be the spawning ground for limitless new developments, businesses and whole marketplaces that no-one has yet conceived. That is so certain it would even make sense to deliver the WiFi services free. The return would be well worth the extra expense.

And it would not only provide existing businesses with the tools needed to improve their current operations and make them far more agile to meet market changes, but also give new businesses the tools needed to spring out of the new ideas such an investment would bring.

Perhaps even more important, it would also give consumers the tools needed to buy, use and enjoy the products and services these businesses create. And those consumers would be all over the UK, not just on the London-Birmingham-Manchester-Leeds axis.

As a final element of enlightened self-interest for CSPs and cloud technology providers, the cloud would be the only sensible way this nationwide `service’ could be delivered. Major hosting services would be the host for many of those new businesses, most of which will be delivering their new products and services as SaaS.

Posted in Business.


`Embassy’ answer to data sovereignty

There has been a spate of stories recently about the security of data being held in third party datacentres, and in particular the `security’ issue of the Governments with jurisdiction over those datacentres claiming – and increasingly, exercising – rights of access to that data.

This once again rattles the cage of data sovereignty and the need for national Governments to have laws that keep data about that country, its commerce and industries, and its people under their control. It is hardly any wonder that some countries therefore remain particularly edgy about where such data is stored.

The corollary of this, however, is that it serves to inhibit the very business advantages – in particular the flexibility and agility needed to meet and exploit changes in markets or business practices – that cloud-based services can deliver best.

But ways of circumventing this problem are starting to appear. There is now the chance to significantly reduce data sovereignty as a business necessity, and in the long term possibly turn it into an irrelevance.

One such is the new Software Defined Datacentre (SDDC) from CohesiveFT, and CEO Patrick Kerpan, speaking with Business Cloud 9 at this week’s Cloud Expo, made it clear this is an opportunity he has long-term designs upon. The goal is to be able to offer users the opportunity to create a logical instance of `business environment A’ that is working to the laws and business rules of `country B’, but have it running on datacentre resources located in `country Z’ without it being either an issue or a security problem.

I have written before about this requirement as what I have called the `bonded warehouse’ model. This is where the instance is the data analogy of the bonded warehouse at a port of entry, where imported goods can be kept as though they were not yet landed. They are free from tax or tariffs, and from the application of local legislation on issues such as health and safety, until such time as the importing company extracts them from bond to be sold.

Kerpan prefers the analogy of the national Embassy. “The Embassy of a country is part of that country, regardless of what country it is in. The Swiss Embassy here in London, for example, is really Swiss territory, not just a bit of London where the Swiss diplomats happen to work,” he explained. “The Software Defined Datacentre can create exactly the same thing for cloud users.”

The SDDC approach is based around what Kerpan calls a cloud container. Set aside any thoughts of anything physical, such as an appliance, being required. This approach is entirely software based.

“This is intended for those that want to use cloud services rather than those that aim to provide them” he said. “It is about how to migrate applications to the cloud. Applications need a set of ambient services, such as LDAP for example, that surround and support them so they work effectively, so all those services need to go into the container with the application. If the IP address for the application is changed the container takes with it everything the application requires.”
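One way to picture what such a container carries is a manifest that bundles the application with its ambient services, its network identity and the jurisdiction whose rules it operates under. The sketch below is purely illustrative – the field names and the relocate helper are my own invention, not CohesiveFT’s actual format.

```python
# Purely illustrative "cloud container" manifest: the application plus
# the ambient services it depends on and the jurisdiction it operates
# under. Field names are invented, not CohesiveFT's format.
container_manifest = {
    "application": {
        "name": "order-processing",
        "image": "order-processing:4.2",       # hypothetical image reference
    },
    "ambient_services": [
        {"type": "ldap", "endpoint": "ldap://directory.internal:389"},
        {"type": "dns", "endpoint": "10.0.0.53"},
        {"type": "logging", "endpoint": "syslog://audit.internal:514"},
    ],
    "network": {
        "overlay_address": "172.16.4.10",      # travels with the container
    },
    "jurisdiction": {
        "operates_under": "country B",         # the 'embassy' the workload belongs to
        "physically_hosted_in": "country Z",   # where the hardware happens to sit
    },
    "security_policy": "restricted-access-v1",
}

def relocate(manifest: dict, new_host_country: str) -> dict:
    """Move the container to new physical hosting; its legal and service
    environment travel with it unchanged."""
    moved = dict(manifest)
    moved["jurisdiction"] = dict(manifest["jurisdiction"],
                                 physically_hosted_in=new_host_country)
    return moved

print(relocate(container_manifest, "country Y")["jurisdiction"])
```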

In broad approach this is similar to the Application Packaging Standard being promoted by Parallels, though the key element of the SDDC is its image management technology that pulls together all the components needed to make up that application’s complete working environment.

In turn, this makes it possible for enterprises to aggregate a number of complementary containers into a single logical resource. And if, at some time in the future, those applications need to be redeployed in a different logical resource – even in a different datacentre environment – the container approach makes this a far simpler task to complete through a logical set of steps.

This allows containers to be used in private, public and hybrid cloud environments.

What stands against extending this functionality out into the `Embassy’ model is now just the law. If the law permitted it, a container running `environment A’ on a datacentre in `country Z’ would not be a concern, because it would still be – legally, logically and technically – operating in `country B’.

And if part of the image associated with an application was a security policy implementation package, the container could even defend itself against intrusion or attack. It is not beyond the bounds of reason that it could be equipped with the tools needed to remove itself from a datacentre and install itself in a different logical or physical location.

For now, however, data sovereignty laws would stand in the way of such an approach. While this may not be a problem yet, there is every chance it will become one. It already restricts the flexibility and agility of action that some companies would like to have at their disposal, and it prevents some sectors of the cloud services marketplace from developing fully.

For example, Amazon has already demonstrated the potential of a global cloud marketplace – if only for service development purposes. But there is no reason why global markets for CSPs trading purely on capacity, resources, performance and core service provision should not develop.

By the same token, service providers offering specialised tools and localisation capabilities could make sense as the local host for multinational business, without the need to slice and dice business processes to fit what data can and cannot be stored or processed outside of a specific country’s jurisdiction.

This first appeared in Business Cloud 9

 

Posted in Business.


Cisco-Parallels deal the mark of something bigger

Interesting trends are building up around cloud service software provider, Parallels, just ahead of the company’s major annual bash, the Parallels Summit, being held next week in Las Vegas.

The prime one is the news that Cisco has bought a small, 1% stake in Parallels. This has raised a goodly number of questions about Cisco’s intentions. Most seem to see it as in some way aggressive, both against VMware, particularly in the desktop virtualisation market, and against Microsoft, in particular its 365 cloud efforts. I would beg to offer a different thought, however.

Back in October, at the Citrix Synergy conference in Barcelona, one of the main keynote speakers was Cisco Chief Technology and Strategy Officer, Padmasree Warrior. She was present because Cisco and Citrix were announcing a new and significant partnership. It was significant because, as I wrote at the time, Warrior was one of the first executives from a major IT vendor to acknowledge that we live in a world of many clouds.

This is part of what she said at the time.

“We see a future of IT changing fast. By 2015 we expect to see 50 times more traffic through smartphones than now, with some 10 billion mobile devices, and more than half of all IT delivered via the cloud. And there will be a need for cloud orchestration for a world of many clouds.

“So we aim to bring together Cisco Unified Computing System, Cisco ONE components and Nexus switches, with Citrix CloudPlatform, Apache CloudStack and Citrix XenServer. The goal is to simplify cloud management and remove the current complexity.”

And now Cisco is not just partnering with Parallels, but actually taking a small stake, which also gives it a seat on the Parallels board. This can certainly be interpreted as a move with some aggressive intentions, but it is also possible to see it as a defensive move. It can be seen as a move that makes buying relevant Cisco hardware and services – as part of a wider investment in cloud infrastructure – a sensible, decerebralised option.

For Cisco’s end users, be they Cloud Service Providers (CSPs) or major enterprises building private or hybrid cloud infrastructures of their own, the question needs to be `is there a good reason to consider suppliers or technologies that are outside the collection of partners working with our prime target service provider?’ If the partners have all chosen well, and are doing their jobs properly, the answer really should be `no’ in the majority of cases.

Cisco is making its own pitch at being the provider of the `one and only’ cloud architecture that any business needs, as is every other technology provider working in the cloud marketplace, including Parallels. But it is never going to be that way – and neither should it. After all, one of the key underlying arguments in favour of using cloud is to reduce – and ideally break clear of – technology lock-ins. Simply swapping one lock-in for another is hardly progress.

So partnering across as many stacks as possible gives Cisco the best position. It has a long-standing partnership with VMware, plus its own implementation of OpenStack. It now has its partnership with Citrix, which gives it a route through to users of the related CloudStack technologies.

Parallels brings three important additions to this mix. One is access to Microsoft, the second is access to another option on desktop virtualisation, and the third is a strong and growing penetration into an important channel component now becoming the main route to end users of all types – the CSPs.

The Microsoft connection arguably fills a crucial hole in Cisco’s strategy till now. The two companies used to be close partners but became estranged when Microsoft, casting round for alternative lines of development, started to try and eat its partner’s lunch with steps like the acquisition of Skype. But with its cloud efforts such as 365 and Azure, Microsoft now offers real potential, both in the market and to partners.

And from what Warrior said in Barcelona, it would appear that Cisco is smart enough to realise that all the cloud technologies in the world are worth zero unless using them delivers something of real value to the end users. The reason Microsoft got into the cloud was it realised that online was not just the new applications software delivery method, but also an increasingly important option for consuming what the applications provide.

Cisco, I feel, understands that while it has much of the technology needed, it now needs to play ever more closely with those that provide the reason for using cloud. That is why it was a featured partner at Oracle’s OpenWorld in San Francisco last October, despite Oracle following Microsoft’s tactic of trying to eat Cisco’s lunch with moves like the acquisition of software-defined networking start-up, Xsigo, back in July.

And Parallels is particularly close to Microsoft. The majority of its senior management are ex-Redmond people, and it could easily be seen by its large CSP channel partner community as the personification of Microsoft here on earth. Parallels provides Cisco with a route through to users and deliverers of the applications which are still the mainstay of the majority of business users around the world, just as those users really start to move cloudwards.

When it comes to desktop virtualisation Cisco already has its VMware relationship, and now also one with Citrix. But Parallels plays here too, and perhaps more to the point Warrior made, the new trend is very much towards mobile. Parallels not only has its own mobile app tools, but also those of Microsoft.

Windows 8, with its commonality across platforms from mobiles to the largest workstations, and all stations in between, is Microsoft’s best shot yet at really breaking through into the mobile sector. This is particularly so in the corporate and business user markets, just as the BYOD movement gains a real head of steam. And let’s not forget that Cisco has some security capabilities here that may prove to be a useful offering in return.

This does also raise the question as to whether desktop virtualisation per se is about to become rather `so last year, dahlink’.

The channel component which Parallels brings cannot be ignored. It has a large and growing cohort of CSPs around the world that all build on Parallels’ cloud services management platform. These are the businesses that end users now interface with directly. Many of them will, in one way or another, become the brand names their marketplaces come to trust and turn to first.

This is where IT is at last following where many other businesses have trodden before. For example, one may buy a Vauxhall car, but the components that go to make it up come from all over, including companies that most would assume are direct rivals of Vauxhall. And the end users don’t care. They don’t care which company made the piston rings – they just expect them to keep working.

And so it is becoming with IT, and no one will care what components Cisco provided in a cloud delivery service. In time, most will not even have heard of the company. So finding routes into the heart of that delivery system is essential, and exactly what Cisco seems to be attempting now with this recent spate of moves.

As a final thought, Parallels has one other card up its sleeve, though for a few years it showed every sign of forgetting about it. This is APS, the Application Packaging Standard, a toolset that provides a way of packaging up applications with all the elements needed for them to be rolled out to CSPs, installed, and implemented by them on a click-of-a-mouse basis. In practice, of course, this really just means applications running on Microsoft servers and OS.

APS was launched a good few years ago, but for reasons that never became quite clear (though probably included fighting its corner in the battle for R&D funding) development stalled. Version 2 is expected to be revealed at the Summit next week, though back at the Summit in early 2010, this version was said to be coming out by the end of that year.

The delay notwithstanding, this tool could play a part in Cisco’s thinking. Its Citrix partnership gives it a route through to CloudStack, and it has its own implementation of OpenStack, both of which offer as a side capability a way of porting applications and services between different CSPs that use the same base environment.

This is an important capability, not least because many potential users see the cloud as a particularly dubious technology lock-in – one where they don’t even have the application on-premise to keep themselves working at least in the short-term. These capabilities can serve to at least reduce fear that the cloud means leaving delicate parts of the corporate anatomy firmly in the hands of third parties.

The following thought is purely speculative, but having the use of APS, CloudStack and OpenStack could put Cisco into a strong position, one where it is able to offer the user community a service which might be termed something like `Intermediary Central’, the ultimate service broker.

It could certainly put it in a position where it could consider claiming to offer the `cloud of clouds’.

Posted in Business.


GE’s industrial analytics may be the key to big cloud management

Here’s an interesting little factoid: a General Electric GE90 jet engine, as found powering commonplace commercial airliners like the Boeing 777, produces as much data about itself in a day of operations as that produced by all the Tweets on Twitter in the same timescale. And those planes each have two engines, so one plane produces twice as much data a day as Twitter.

The company has now sold over 2,000 examples of this engine, though some 800 are still to be delivered. That means the 1,200 or so already in service are, between them, producing 1,200 times the amount of data Twitter produces, every day. And its competitors – the Rolls Royce Trent and Pratt and Whitney PW4000 – are no doubt capable of producing not dissimilar amounts of data about themselves.

The jet engines are also only a part of the industrial and energy production/transfer systems GE produces. These range from gas turbines used to power electricity generation and the complementary large generator sets through to a wide range of sensors and the analytics software needed to make sense of what the sensors discover. All of these are producing vast quantities of real time data.

All of this raises the question of what happens to all that data. The answer, I feel, has more than a little bearing on one of the fundamental issues facing all of cloud computing as it develops and penetrates deeper into the mainstream of business services and management, regardless of whether that business is industrial, financial, retail or service-oriented.

According to William Ruh, VP of GE’s Global Software Center in California, increasingly what happens to that data is that it is used to manage not just the real time operations of the systems, but also build models of future operation in an autonomic fashion. It allows analytical tools to effectively learn how the system operates and behaves, and gives it the raw material with which to predict what its future state is likely to be.

“With the jet engines, for example, it provides information on all the components and their future state,” he said. “It can predict when a component is likely to fail, and as the dataset on the component grows, the predictions get more accurate. For example, they can tell that a component will last for one, two, or more trips across the Atlantic but will then need replacing or repair. That allows the maintenance crews to plan their work in advance, which is more efficient, much cheaper, and much more convenient to passengers than having to fix it after it has broken.

“It also means we can monitor the performance of the engines very closely, making them more efficient. Using our analytics we can normally reduce fuel consumption by around 2% per engine. Given that airlines can spend hundreds of millions of Dollars a year on fuel, that is a saving measured in millions straight onto the bottom line.”
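The predictive side of this is easy to illustrate in miniature. The toy example below – with invented readings and limits, and nothing to do with GE’s actual analytics – fits a simple trend to a component’s measured wear and estimates how many more flights it can make before maintenance is due; as the dataset grows, the estimate improves, which is exactly the point Ruh makes.

```python
# Toy predictive-maintenance illustration (not GE's analytics): estimate
# remaining flights from a component's wear trend. Readings are invented.
wear_per_flight = [0.40, 0.42, 0.41, 0.44, 0.46, 0.47]  # wear added each flight
total_wear = sum(wear_per_flight)
REPLACEMENT_LIMIT = 5.0                                  # assumed wear limit

# Simple trend: average wear added over the most recent flights.
recent_rate = sum(wear_per_flight[-3:]) / 3
flights_remaining = (REPLACEMENT_LIMIT - total_wear) / recent_rate

print(f"wear so far: {total_wear:.2f} of {REPLACEMENT_LIMIT}")
print(f"estimated flights before maintenance: {flights_remaining:.1f}")
# As more flights are logged, the rate estimate -- and therefore the
# prediction -- gets steadily more accurate.
```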

And what have jet engines and large industrial systems to do with the cloud? Well, in a word, everything, for the fundamental large systems models behind them all are broadly the same. And with the coming of the Internet of Things marketing tag – the way IPv6 has the capacity to give everything and anything a unique address – everything can now communicate with everything else.

In the cloud, just as in large industrial systems or complex, safety-critical systems like commercial airliners, all components need to be part of the real-time `conversation’ which ensures that every element is working correctly; that none are doing what they should not be doing, or doing the right thing at the wrong time (both of which are good indicators of a security problem in IT systems); that their profile of future use is known and satisfactory; and that their interaction with other systems is neither affecting those systems nor being affected by them.

Much is sometimes made – often correctly – of how `automation’ puts people out of work. But with the cloud, as with a wide range of complex industrial systems, there are arguably not enough people in the world to monitor and manage the systems in real time. The only possible way of achieving this is through automated services built around analytical tools working in real time.

This is particularly so with the cloud, where the actual resources providing the services a business is using at any point in time could be spread around the world. One only has to look at Compuware’s Gomez service to see how many service providers can contribute to what the user thinks is simply accessing one webpage. They all need to work properly for any of it to work at all, and as the Internet of Things concept starts to make more significant contributions to business operation and management services, the need for a much bigger `all’ to keep working well will grow exponentially.
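
As a very rough sketch of what that automated watching might look like at its simplest, the snippet below polls a handful of stand-in endpoints in parallel and flags anything that does not respond as expected. A real Gomez-style service measures far more, far more often, and the URLs here are placeholders rather than real service providers.

```python
# Poll several service endpoints concurrently and flag problems - a toy version
# of the automated, real-time monitoring argued for above.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINTS = [
    "https://example.com/",
    "https://example.org/",
    "https://example.net/",
]

def check(url, timeout=5):
    """Return the URL, its HTTP status (or None on failure) and a short detail string."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            status = response.status
    except Exception as exc:
        return url, None, str(exc)
    return url, status, f"{time.monotonic() - start:.3f}s"

with ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
    for url, status, detail in pool.map(check, ENDPOINTS):
        flag = "OK" if status == 200 else "CHECK"  # anything unexpected needs a closer look
        print(f"{flag:5} {url} {detail}")
```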

This is where GE may just have an important ace up its sleeve. As Ruh indicated, his team has come up with several suites of analytical tools for different industry sectors. But down at the heart of them all lies the same fundamental set of algorithms, regardless of the `machine’ being monitored.

So, in theory at least, the company already has the basic tools needed to manage much more than just `industrial systems’. Not only that, but those tools are already working at a scale that would dwarf the needs of most enterprise IT environments. What is perhaps more interesting is that, while Ruh can see the possibilities, he and his team already have more than enough to contend with in their classic industrial backyard.

It does make GE ripe for an IT partnership, of course: with a business that could refine and optimise those fundamentals for a cloud management role, and then perhaps deliver the result as SaaS.

This blog first appeared in Business Cloud 9

Posted in Business.


SaaS can avoid the Tech-Upgrade Cliff, even for high-value tasks

Against a background of now open-ended austerity, where investment not only needs particularly careful planning but also a rigorous and quite possibly unsuccessful search for funds, there is one prediction for 2013 that may cast a shadow over the future plans of many enterprises, both large and small. That prediction is the coming of the Tech-Upgrade Cliff.

Like the judiciously, if temporarily, escaped Fiscal Cliff, which could have tipped the whole US economy into a quagmire, the Tech-Upgrade Cliff could tip many enterprises into a pit where further development of the business becomes hog-tied by an inability to upgrade their existing business-critical applications.

The impact will certainly extend to the vendors of those business-critical applications. Indeed, the impact on them could be much greater, because the enterprise users at least have a potential way out.

The issues underpinning the growth of the Tech-Upgrade Cliff emerged during a discussion towards the end of last year with Host Analytics, a business specialising in delivering SaaS-based Corporate Performance Management (CPM) services.

The essence of the problem lies in the underlying austere economic climate, with investment funds hard to obtain even with good justification. This has led many user businesses to skip upgrades over the last four or five years. They have, until now, been able to `make do’ with their existing applications, even as those applications approach end of life. Sweating investments has become the mantra for enterprise IT departments.

But now three important changes are coming into play as a complementary set. One is that the latest versions of these applications are appearing, incorporating capabilities that businesses would like to exploit, particularly in identifying and pursuing new business and revenue opportunities. The second is that, because the users have failed to follow the linear upgrade path, they will probably find themselves obliged to pay penalty charges to the vendors to upgrade to the latest versions.

Finally, next year will certainly see many of those older versions being `sunsetted’ by the vendors: support will be reduced to critical maintenance only, and even that will tend to be available only at premium prices, as the applications reach what the vendors see as their official end of life.

The result of this trend will, therefore, see enterprises large and small faced with a choice – pay the going rate imposed on them by the application providers, struggle on as they are, or find some alternative.

It is the alternative that Host Analytics’ CEO, David Kellogg, aims to present to those enterprises looking for a way to move on in the development and use of Corporate Performance Management. This is an application type targeted almost exclusively at the largest enterprises, where the need for what he calls `strategic financial management’ is at its greatest. It covers financial planning, financial consolidations, complex budgeting and strategic financial analytics. CPM systems are largely responsible for producing the numbers that financial regulators and stock market traders and analysts work with.

It is also a sector where on-premise applications, such as Hyperion, dominate and where concern about moving such capabilities to the cloud can be expected to be at its keenest.

“Business Intelligence users have been slow adopters of the cloud and CPM users have been even slower,” he observed. “Financial people are naturally conservative. But the upgrade issue is one important reason why it is now starting to happen, especially in the USA. We are getting a lot of interest from Hyperion users facing the update issue. Many of them are on Version 9 and the latest update is Version 11. What we offer is effectively Hyperion in the cloud.”

Undertaking this level of financial management in the cloud is interesting, as data security, and in particular financial data security, is the common concern for most users when the subject of cloud utilisation comes up.

However, according to Ron Baden, the company’s VP of Services, Host Analytics undergoes annual SSAE 16 SOC1 Type 2 and SOC 2 audits, other security audits, and internal compliance self-assessments as the foundation of its compliance program for critical regulatory requirements.

“Doing what we do, we get audited by everyone,” Baden said. “Running financial applications in the cloud is a very serious business. Everything that happens on our service is logged and tracked so that it can be fully audited. Nothing happens without those who should know, knowing about it.”

This includes dealing with the one operational case where on-premise is said to be better than the cloud – proving that processes have run on a stated version of the application. Auditors often insist on this for periods of time during certain business processes, such as annual audits, and it is often thought to be all but impossible in a multi-tenanted SaaS environment.

Host Analytics overcomes this by initiating temporary application pods: virtual instances running the specific version of the required application. If different customers need to be locked down on the same version of an application, that gives the company the chance to multi-tenant them in a single application pod.
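
A purely illustrative sketch of that allocation logic might look something like the following; the class names, method names and version labels are invented for the example and are not Host Analytics’ actual implementation.

```python
# Toy model of version-pinned 'application pods': customers locked to the same
# version share a pod; everyone else gets a pod pinned to their own version.
from dataclasses import dataclass, field

@dataclass
class Pod:
    version: str                   # application version this pod is pinned to
    tenants: set = field(default_factory=set)

class PodManager:
    def __init__(self):
        self._pods = {}            # one pod per pinned version

    def assign(self, customer, required_version):
        """Place a customer in a pod running exactly the version their auditors require."""
        pod = self._pods.setdefault(required_version, Pod(required_version))
        pod.tenants.add(customer)  # same-version customers are multi-tenanted together
        return pod

manager = PodManager()
manager.assign("customer-a", "version-11")   # hypothetical version labels
manager.assign("customer-b", "version-11")   # shares a pod with customer-a
manager.assign("customer-c", "version-9")    # isolated on an older, audit-locked version

for version, pod in manager._pods.items():
    print(version, sorted(pod.tenants))
```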

This level of operational security is, according to Kellogg, proving to be one of the capabilities selling the Host Analytics service, despite the high-value nature of the business being handled. “They now seem to be going for the benefits of cloud,” he said, “including the fact that it is green compared to on-premise operations, and that no new hardware or software investment is required, which means that deployment is faster. In addition, they can keep up with the pace of innovation while eliminating the upgrade problem. And of course there is good security.”

Ron Baden added the observation that upgrading an on-premise application is now a time-consuming and complex task. “It will normally require a great deal of change-management work, and can take at least six months to complete to the point where the business can function at the same level as before. So it will have effectively stood still for that time. We estimate that on-premise Hyperion typically takes 2,000 man-days to implement from scratch, where scratch also includes undertaking a major upgrade.

“We estimate that users can move to Host Analytics for around one-tenth the cost of trying to move to the next upgrade, and have it deployed and contributing to revenue much faster. We estimate the speed to value at six months, rather than the four years or more needed for an on-premise installation. With SaaS they also get the advantage that, as a SaaS vendor, we need our customers to stay with us for two or three years to make money, so we have a strong vested interest in making it work, and work well.”

Kellogg compared this to the sales pitches put forward by on-premise vendors, noting that their sales staff earn commissions geared to the up-front licence fees users are obliged to pay.

The approach is starting to broaden the market for the company. While large enterprise Hyperion users remain the primary target, Kellogg indicated the company is now generating interest among what he would call the `SME’ community – businesses generating revenues of under $100m a year.

“They are not a core focus for us as yet,” he said, “but we are finding interest amongst users where their financial planning and budgeting tasks can no longer be carried out effectively by torturing Excel.”

This story was first published in Business Cloud 9 here

Posted in Business.



