A software engineer spends a large share of working time setting up and preparing resources rather than coding, testing, and debugging.
The availability of adequate testing infrastructure is a major factor in development project costs. Yet, in a bid to meet business targets, companies cut expenditure at the expense of the very infrastructure that delivery expectations depend on.
Within QA, the development team's ability to construct test suites is impeded by the resource constraints endemic to today's heterogeneous enterprise environments. This critically handicaps the team's ability to roll out and evolve secure, dependable, and compliant applications on time and within budget. As applications grow at a breakneck pace in complexity and distribution, the problem compounds.
Test optimization is often treated as a way to cut infrastructure investment significantly, but with limited hardware resources, enterprise development teams face a continuing loss of productivity.
A severe bottleneck that slows or stalls projects is persistent facility constraints, such as rack space and network ports, that hamper access to hardware. Testing stops for days or even weeks while developers wait for a server to be configured or for a setup to be built from scratch for the target application. Because specific applications get higher testing priority, managers dedicate servers to those urgent needs, excluding all other workloads.
With rigid and confined resources, it becomes an uphill task to find appropriate hardware promptly when a high-priority bug is detected and needs debugging. The significant hardware requirements for development and testing thus carry a high cost.
Developers working on multi-tiered projects require multiple physical systems even for basic application development and functional testing. Resource constraints limit the number of configuration permutations that can be tested, narrowing test coverage and compromising the robustness of the finished application. Sophisticated multi-tier applications demand multiple physical systems by their very nature, since each tier must be hosted on a system networked to the others.
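To make the coverage point concrete, here is a minimal Python sketch of the arithmetic. The tier names and server counts are illustrative assumptions, not drawn from any real project; the point is how quickly configuration permutations outgrow a fixed pool of physical test machines:

```python
from itertools import product

# Hypothetical tiers of a multi-tier application; names are illustrative.
operating_systems = ["RHEL 8", "Windows Server 2019", "Ubuntu 22.04"]
databases = ["PostgreSQL 15", "MySQL 8", "Oracle 19c"]
app_servers = ["Tomcat 9", "WildFly 27"]

# Every combination that full functional coverage would ideally exercise.
all_configs = list(product(operating_systems, databases, app_servers))
print(f"Permutations needing coverage: {len(all_configs)}")  # 3 * 3 * 2 = 18

# With only a handful of dedicated physical servers, each hosting one
# configuration at a time, most permutations simply go untested.
physical_servers = 4
covered = all_configs[:physical_servers]
print(f"Coverage with {physical_servers} physical servers: "
      f"{len(covered) / len(all_configs):.0%}")
```

Even this toy example leaves more than three quarters of the permutations untested; real enterprise matrices (patch levels, browsers, locales) are far larger.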
Most organizations piece together separate environments for development, QA, and pre-production testing. Here the test equipment sits at the physical layer, requiring separate hardware for each purpose. The challenge lies in understanding the impact of a system change: as parts are appended to the main framework, inter-dependencies multiply, and a change can affect, if you're lucky, one part of the system or, in a 'Nightmare on Elm Street' scenario, the whole of it.
Where then, does the solution lie?
Should developers and test managers continue to depend on scarce resources to meet highly unrealistic demands and deadlines amid dwindling budgets?
Product developers have devised methods to minimize the use of "real" test equipment when testing software solutions. These methods reduce dependency on purely physical infrastructure, saving resources and time and increasing productivity.
It is far more difficult for an IT group to manually configure, change, and synchronize a pre-production staging environment than to import software images of physical servers into a virtual environment. Virtualization saves space and time, and lets teams run what would otherwise require multiple physical systems concurrently on shared hardware.
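The workflow described above can be sketched in a few lines of Python. This is a conceptual model only, not a real hypervisor or provisioning API: it shows how one captured "golden" server image can be cloned into several isolated, in-sync environments, instead of hand-configuring separate physical boxes for dev, QA, and staging:

```python
from copy import deepcopy
from dataclasses import dataclass, field

@dataclass
class ServerImage:
    """A captured software image of a physical server (illustrative model)."""
    hostname: str
    packages: list = field(default_factory=list)
    config: dict = field(default_factory=dict)

def clone_environment(golden: ServerImage, purpose: str) -> ServerImage:
    """Clone the golden image into an isolated virtual environment.

    In a real virtual setup this is an image copy taking minutes,
    versus days of manual configuration on dedicated hardware.
    """
    env = deepcopy(golden)
    env.hostname = f"{golden.hostname}-{purpose}"
    return env

# One image captured from a production-like server...
golden = ServerImage("appserver", packages=["java-17", "tomcat9"],
                     config={"db_host": "db01", "port": 8080})

# ...cloned into dev, QA, and staging environments that start in sync
# because they all derive from the same source image.
envs = [clone_environment(golden, p) for p in ("dev", "qa", "staging")]
print([e.hostname for e in envs])

# Changing one clone does not disturb the golden image or the others.
envs[0].config["port"] = 9090
print(golden.config["port"], envs[0].config["port"])  # 8080 9090
```

The design point is isolation plus a single source of truth: every environment descends from one image, so "synchronizing" environments reduces to re-cloning rather than manual reconciliation.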
So try it out. I know this post is a bolt from the blue, but some of our long-term readers needed the help (and that's what we do: advise on how).