Companies that rush into IoT often find themselves with data overload, according to the local head of application performance management provider AppDynamics.

But it’s a situation that can easily be avoided by establishing a business case for the IoT application first, said Andrew Brockfield, AppDynamics’ country manager for Australia and New Zealand.

“It all starts with the fundamental question of why. A lot has been discussed around IoT with use cases, but just because technology is capable of doing something doesn’t mean it necessarily makes sense for you to adopt it for your business,” Brockfield told IoT Hub.

“I’ve seen many solutions that have been built around the idea that they have to gather everything, when the inherent value in the data might lie in only five to ten percent of it.

“We sometimes romanticise what we’re capable of doing, as opposed to the value of the output of the process and the technology used.”

He said that the IT industry has historically been guilty of “capturing everything, just in case”, but IoT can provide tools to ensure that important data is not discarded while keeping associated data collection and management costs down.

“One possibility is to design systems that allow you to turn data streams on or off at will, either at the device level or the application level,” he said.

“That way, if a new business case emerges for data you can collect, for example, you can easily flip the switch and start generating value from that data more quickly.”
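To make the idea concrete, a minimal sketch of such a switchable stream might look like the following, in Python. Everything here — the TelemetryGateway class, the stream names and the storage stub — is a hypothetical illustration of the pattern Brockfield describes, not an AppDynamics product feature.

```python
# Hypothetical sketch of per-stream collection toggles, as Brockfield
# describes. Names and structure are illustrative, not a real API.

class TelemetryGateway:
    """Forwards device readings, but only for streams switched on."""

    def __init__(self):
        self._enabled: dict[str, bool] = {}  # stream name -> on/off

    def set_stream(self, stream: str, enabled: bool) -> None:
        """Flip a data stream on or off at runtime ('flip the switch')."""
        self._enabled[stream] = enabled

    def ingest(self, stream: str, reading: dict) -> None:
        # Readings for streams with no business case yet are dropped here;
        # the stream can be enabled later without redeploying devices.
        if not self._enabled.get(stream, False):
            return
        self._store(stream, reading)

    def _store(self, stream: str, reading: dict) -> None:
        # Stand-in for a real pipeline (message queue, time-series DB, ...).
        print(f"stored {stream}: {reading}")


gateway = TelemetryGateway()
gateway.ingest("vibration", {"rms": 0.42})   # dropped: stream is off
gateway.set_stream("vibration", True)        # a new business case appears
gateway.ingest("vibration", {"rms": 0.45})   # now collected
```

The point of the design is that the decision about what to keep lives in configuration rather than in device firmware, so the cost of collecting a stream is only paid once its value is established.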

Don’t forget about performance and reliability

Brockfield said that the sheer volume of data and transactions that software is now being asked to process in IoT environments also leads to challenges in maintaining high-performing, reliable systems.

“The ability to scale and manage vast amounts of data and processing is one of the biggest challenges for us and our clients,” he said.

“2016 is a time when users have become critically reliant on software, and when it’s not there, it tends to have a significant impact.”

Brockfield said that application performance is closely tied to reliability, and that availability with poor performance can have similar consequences to no availability at all.

He added that software reliability is just as important as reliability at the hardware and infrastructure levels, due in part to the expectations that consumers now have of their interactions with technology.

“I think we as users and consumers of applications have become a lot less tolerant of delays, particularly if you’ve got a younger demographic as a user base,” he observed.

“If it was good enough 12 months ago from a user perspective, typically in most cases it’s not going to be good enough today.”

What can be done?

This volume of activity can quickly overwhelm any attempts to keep tabs on it all, so Brockfield suggests a strategy of “sorting out the wheat from the chaff.”

“IoT brings with it an amazing amount of data and transaction volume, and it’s often not physically possible to put the same degree of analysis on every transaction,” he said.

“So you need to sort out the good from the bad, the interesting insights from the uninteresting.”

He said that you can solve this problem by defining business transactions and monitoring all of them, but only generating alerts for those which go wrong.

“Typically there’s a negative consequence with these instances, whether it be a user that’s dissatisfied because performance isn’t where it needs to be, or they’re not getting the result they expected because the transaction has errored or stalled,” he said.
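In practice, that approach amounts to recording every transaction but alerting only on the ones that error, stall or breach a latency threshold. The sketch below illustrates the idea in Python under assumed names and numbers — the Transaction dataclass, the record helper and the two-second threshold are inventions for the example, not AppDynamics’ actual data model.

```python
# Illustrative sketch: monitor every business transaction, but only
# generate alerts for the ones that go wrong. All names and thresholds
# here are assumptions for the example.

from dataclasses import dataclass
from typing import Optional

SLOW_THRESHOLD_S = 2.0  # assumed SLA: slower than this is "not where it needs to be"

@dataclass
class Transaction:
    name: str
    duration_s: float
    error: Optional[str] = None

history: list[Transaction] = []  # every transaction is recorded...
alerts: list[str] = []           # ...but only the bad ones raise alerts

def record(tx: Transaction) -> None:
    history.append(tx)  # keep everything for later analysis
    if tx.error is not None:
        alerts.append(f"{tx.name} errored: {tx.error}")
    elif tx.duration_s > SLOW_THRESHOLD_S:
        alerts.append(f"{tx.name} slow: {tx.duration_s:.2f}s")

record(Transaction("checkout", 0.31))                 # healthy: stored, no alert
record(Transaction("checkout", 3.10))                 # slow: stored and alerted
record(Transaction("login", 0.20, error="stalled"))   # errored: alerted
print(alerts)  # -> ['checkout slow: 3.10s', 'login errored: stalled']
```

Separating recording from alerting in this way keeps the full transaction history available for analysis while the alert stream stays small enough for a human to act on.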