8 Mistakes ISVs Make when Moving to the Cloud

Cloud software is taking over everywhere, and for good reason. Software publishers are rethinking their business models: some want to add cloud services to their repertoire, and some are looking to move entirely to the cloud. It’s a very different world from writing for desktop machines. While the fundamentals of good code haven’t changed, a lot else has. ISVs migrating to the cloud need to understand this brave new world, because mistakes can run up huge costs, give customers an unpleasant experience, or lead to a data breach. Here are eight ways an ISV can go wrong when creating cloud software.

1. Neglecting scalability and elasticity

Why go to the cloud? The answer you’ll hear more than any other is “scalability.” If the customer wants more computing power or storage, it’s available. If a popular website recommends a service and thousands of new users show up, it will accommodate them. At least, that’s what happens if the application is designed to be scalable. For best results, it should scale out rather than up. In other words, it should be able to spread its load over more processors rather than requiring a more powerful processor with more memory.

From the software standpoint, being scalable means being elastic. The application shouldn’t be wasteful when only a few users are active, and it shouldn’t be constrained by hard limits when many people are using it. It needs to allocate resources as they’re needed and free them when they’re done, rather than hanging onto a fixed pool.

Load testing is vital to make sure an application doesn’t bog down when there are a lot of users. Memory leaks that go unnoticed in a single-user application will accumulate rapidly in a cloud environment, forcing frequent restarts. Data structures that are efficient in a single-user environment may scale up badly when there are many users.
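The allocate-on-demand, release-when-done pattern can be sketched in a few lines. This is a toy illustration, not a real resource manager; the `ElasticPool` class and its counters are hypothetical, but they show the difference between holding a fixed allocation and growing and shrinking with actual demand:

```python
from contextlib import contextmanager

class ElasticPool:
    """Toy pool that grows on demand and shrinks when idle,
    rather than holding a fixed allocation (hypothetical example)."""
    def __init__(self):
        self.active = 0   # resources currently in use
        self.peak = 0     # high-water mark for capacity planning

    @contextmanager
    def worker(self):
        self.active += 1                  # allocate only when needed
        self.peak = max(self.peak, self.active)
        try:
            yield
        finally:
            self.active -= 1              # release as soon as the work is done

pool = ElasticPool()
with pool.worker():
    with pool.worker():
        pass                              # two concurrent requests in flight
print(pool.active, pool.peak)             # 0 2
```

The same shape applies whether the "resource" is a thread, a container instance, or a database connection: nothing is held past the point where it is needed.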

2. Creating a monolithic architecture

A good way to promote scalability is to break the software into small, loosely coupled pieces. A monolithic design requires scaling up more than out. Any given user won’t use all the features of an application; breaking out services lets the application spawn only as many instances of them as necessary and terminate them when they’re done.

Letting components communicate through message queues is a good cloud practice. The UI is less likely to leave the user waiting for a screen refresh if the code uses separate, asynchronous pieces. This approach also decreases waiting time for someone who’s just connected to the application. People are used to long load times when first launching desktop applications, but if they have to wait as long for a cloud application to respond, they’ll think it’s hopelessly slow and may give up.

Containerized services do wonders for scalability. Launching as many as needed is straightforward, and each instance is independent of the others. Containers aren’t the only way to break software into services, but they’re one that’s worth considering. However it’s done, decoupling components makes an application more elastic.
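A minimal sketch of queue-based decoupling, using only the standard library (in production this would be a managed queue such as a cloud message bus, and the worker would be a separate service, but the shape is the same): the producer enqueues work and returns immediately, so a slow task never blocks the caller.

```python
import queue
import threading

# Components communicate through a queue instead of calling each other directly.
tasks = queue.Queue()
results = []

def worker():
    """Consumes messages until it sees the shutdown sentinel."""
    while True:
        msg = tasks.get()
        if msg is None:                  # sentinel: shut the worker down
            break
        results.append(msg.upper())      # stand-in for real work

t = threading.Thread(target=worker)
t.start()

for job in ["render", "export"]:
    tasks.put(job)                       # enqueue and return immediately
tasks.put(None)                          # signal shutdown
t.join()

print(results)                           # ['RENDER', 'EXPORT']
```

Because the producer and consumer only share the queue, either side can be scaled, replaced, or restarted independently of the other.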

3. Not making good use of platform services

Platforms such as Azure provide many services that applications can make good use of. Their developers have put a lot of thought and testing into making them run well in a cloud environment. Even for experienced developers, moving from a single-user system with a private file structure to an environment shared among many users is a difficult transition. The best results come from building on the platform developers’ expertise.

Storage management, databases, file handling, and user state management are just some of the things that work quite differently on a cloud platform. Not taking advantage of the platform’s services can mean inefficient use of resources and poor performance. Using them is an easy way to break out portions of the application, and it leaves less code to debug and maintain. Familiarity with the many services on Azure can seriously reduce development time while producing a more robust application.

4. Forgetting Murphy’s Law

Everyone forgets the original meaning of Murphy’s law; it has turned into a throwaway expression of cynicism. As originally formulated in 1949, it was a principle of good design: for every part of a system, either make sure that it can’t go wrong or assume that it eventually will. If it can go wrong, set up protections so that it does as little harm as possible. This is a sound principle for all software design, but it’s particularly important for cloud applications. If a desktop application occasionally crashes, users will grumble but relaunch it. If a cloud application stops dead or puts “Null pointer exception” on the screen, it looks really bad.

Users may provide bad input, either accidentally or maliciously. Services may time out or give invalid responses. An expected data item may not be there. The code has to be designed to catch all such problems and continue in a way that gives the user as much continuity as possible. In the worst case, the application should provide a polite error message rather than failing silently or spewing technical jargon. Displaying “Sorry, we’re having technical problems” is better than putting up hexadecimal addresses. Aside from confusing the user, a data dump could provide clues about system weaknesses.

Losing user data is bad, and storing invalid or inconsistent data is worse. Calls to services should, as far as possible, always leave the data in a consistent state.
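The pattern above can be sketched in a few lines. This is a simplified illustration (the `fetch_profile` function and the service callable are hypothetical): every failure mode, whether a timeout or a missing data item, collapses into one polite, consistent response instead of a raw exception reaching the user.

```python
def fetch_profile(user_id, service):
    """Call an external service, but never let its failures
    leak a raw error to the user."""
    try:
        data = service(user_id)
        if data is None:                       # expected item missing
            raise ValueError("empty response")
        return {"ok": True, "profile": data}
    except Exception:
        # Polite fallback instead of a stack trace or a silent failure.
        return {"ok": False,
                "message": "Sorry, we're having technical problems"}

def flaky(_user_id):
    raise TimeoutError("upstream timed out")   # simulated service failure

print(fetch_profile(42, flaky)["message"])     # Sorry, we're having technical problems
print(fetch_profile(42, lambda uid: {"id": uid})["ok"])  # True
```

In a real application the `except` branch would also log the underlying error for the operators; the point is that the user sees the friendly message while the details stay server-side.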

5. Creating bottlenecks

The load on a cloud server can vary greatly. With a bad design, a slow response at one point can force everything else to wait for it, dragging down performance for all users. A complex operation by one user can affect all the other users. Monolithic architectures are especially prone to this problem.

Good cloud software makes heavy use of parallelism. Independently running services for each user process will avoid most bottlenecks, but a heavily used resource can still be a limiting factor. If a third-party service is running slowly, it might not be possible to do much about it, but at least it shouldn’t hold back people who aren’t relying on it. An inefficient service can create an ongoing bottleneck; if it constantly slows the application down, it may be necessary to redesign it or to allocate more resources to it.

Some applications make unrealistic assumptions about the turnaround time for a user action. The more interactive an application is, the greater the chance of a bottleneck there, and users with a slow Internet connection will find it especially frustrating. The fix is often to move functionality to the client side and update the server asynchronously.
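A small sketch of the parallelism point, using the standard library's thread pool (the `slow_third_party` and `fast_local` functions are hypothetical stand-ins): because the two calls run concurrently, the fast path returns without waiting for the slow dependency, and the total time is governed by the slowest call rather than the sum of all of them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_third_party():
    time.sleep(0.2)        # simulate a sluggish external dependency
    return "slow"

def fast_local():
    return "fast"

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    a = pool.submit(slow_third_party)   # dispatched in parallel...
    b = pool.submit(fast_local)         # ...not one after the other
    fast_result = b.result()            # available almost immediately
    slow_result = a.result()            # waited on separately
elapsed = time.perf_counter() - start

print(fast_result, slow_result)         # fast slow
```

Run sequentially, the same two calls would always pay the full 0.2 seconds before the fast result appeared; here, users who don't depend on the slow service aren't held back by it.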

6. Failure to understand the deployment

Deploying cloud applications is quite a different activity from delivering software for the user’s computer. In some ways it’s easier, since the ISV has direct control and can upgrade whenever necessary. Falling back into monolithic thinking, though, makes it harder than it needs to be.

When users of a desktop application get an update, they generally know about it. It can take its time updating files as necessary, so it moves smoothly from one consistent state to another. With a cloud application, it’s necessary to keep disruption to a minimum. If bringing the service down is required, users need ample warning. Ideally, a new version can replace the old one with no downtime. In one model, existing sessions run the old code until they expire, while new sessions start on the new version. This requires that the versions be able to run side by side, and it won’t work well if sessions can last indefinitely. Running cloud deployments smoothly takes experience and new ways of thinking.
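The side-by-side model can be reduced to a toy sketch (the version names and handlers here are hypothetical): each session is pinned to whatever version was current when it started, so a deployment is just flipping a pointer, with no downtime and no disruption to sessions already in flight.

```python
# Two versions of the code deployed side by side.
handlers = {"v1": lambda: "old behavior",
            "v2": lambda: "new behavior"}

current_version = "v1"
sessions = {}

def start_session(session_id):
    sessions[session_id] = current_version   # pin the version at session start

def handle(session_id):
    return handlers[sessions[session_id]]()  # route to the pinned version

start_session("alice")           # begins under v1
current_version = "v2"           # deploy: flip the pointer, no downtime
start_session("bob")             # new sessions run the new code

print(handle("alice"), handle("bob"))   # old behavior new behavior
```

Once every v1 session has expired, the old handlers can be retired entirely; the caveat from the text applies, since a session that never expires would keep the old code alive indefinitely.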

7. Neglecting security

The major cloud platforms offer excellent security, but they can’t protect an application from its own flaws. A desktop application is normally out of public view; unless there’s a network breach, only the parts of it that interact with the outside world raise security concerns. A cloud application has far more exposure, and its interface needs careful checking for vulnerabilities.

Services need to communicate with one another securely, and their APIs need to incorporate authentication. A weakness here could let an intruder steal data directly through the API, bypassing the user interface. Signed-up users can be a risk too, especially if the service is free or offers a trial period. Misuse of the application, such as flooding it with a bot, could mount a denial-of-service attack on the application or use it as a base for attacking other systems. Cloud applications need rigorous security testing before being deployed to the public, and developers from a desktop background may not be used to thinking that way.
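One common way to authenticate service-to-service calls is request signing with a shared secret, sketched here with the standard library's `hmac` module (the secret and request body are placeholders; a real deployment would load the secret from a secure store, not the source code). A caller that doesn't hold the secret can't produce a valid signature, so a tampered or forged request is rejected before it reaches the API logic:

```python
import hmac
import hashlib

SECRET = b"shared-secret"   # placeholder; load from a secret store in practice

def sign(body: bytes) -> str:
    """Produce an HMAC-SHA256 signature for a request body."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Check a signature using a constant-time comparison."""
    return hmac.compare_digest(sign(body), signature)

body = b'{"action": "export"}'
good = verify(body, sign(body))        # legitimate caller
bad = verify(body, sign(b"tampered"))  # signature doesn't match this body

print(good, bad)                       # True False
```

The constant-time comparison matters: comparing signatures with `==` can leak timing information that helps an attacker forge them byte by byte.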

8. Moving everything to the cloud

Finally, not everything should be a cloud application. Some applications are too difficult to port, especially ones built on legacy code that’s already hard to maintain. Creating a new cloud application from scratch may be an option, but that’s a longer-term project. Applications which deal with a high volume of data and perform real-time processing will be problematic. Ones that collect data from instruments are apt to fall into this category. It may make the most sense to split them into an on-premises part, which processes the data as it comes in, and a cloud part, which handles long-term storage and analytics. Some applications have such strong security requirements that exposing them to the Internet isn’t a viable option. If the information needs to be kept behind guarded doors and air-gapped from the Internet, it’s not a good choice for migration.

Microsoft Azure has features that help you avoid these mistakes

Developers on Microsoft Azure can avoid these mistakes if they understand the platform well. You can choose Infrastructure as a Service (IaaS) if you need full control of the application’s environment, or Platform as a Service (PaaS) to gain the benefits of managed services. DevTest Labs lets you set up development environments on virtual machines easily. Performance metrics let you measure resource usage and discover bottlenecks in your code. The developer’s guide contains a wealth of information on the best ways to create applications.

When an ISV migrates to the cloud, it needs a new set of skills, and it takes time to develop them and to learn the tools that support the job. Agile IT’s experts on cloud migration can help you get it right. Contact us for details.

