Bio-inspired automation technology – limitless possibilities

The ingenuity of nature has always intrigued scientists and inspired generations of inventors to understand how natural processes work and to apply that learning to innovative bio-inspired solutions.  In this blog, we look at some of the research that has been going on and the “bio” inspiration driving it.

THE ANTS

A project at the MIT Artificial Intelligence Laboratory aims to develop a community of cubic-inch micro-robots that form a structured robotic community and exhibit social behavior.  Each robot is equipped with five different sensors: IR, food, tilt, bump and light.  The inspiration came from natural ant colonies.  From a technical perspective, the challenge was to accommodate so much hardware in such a small volume (1 cu. in.).  The microprocessor was just an 8-bit part running at 2 MHz with EEPROM, roughly equivalent to the first IBM PC.  The important point is that each robot not only runs its own code but in parallel communicates with its environment and with other robots, which adds considerable complexity.
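One common way such sensor-driven robots are programmed is priority-based behavior arbitration: the highest-priority sensor that fires decides the action. The sketch below is illustrative only (it is not the MIT Ants code), and the sensor priorities and action names are assumptions chosen to match the sensor list above.

```python
# A minimal, hypothetical sketch of priority-based behavior arbitration
# for a micro-robot with several sensors: the highest-priority sensor
# that fires decides the action taken this cycle.

def choose_action(sensors):
    """Pick an action from sensor readings, highest priority first."""
    # Priorities mirror the sensor list in the post: bump > tilt > food > IR > light.
    if sensors.get("bump"):
        return "reverse"        # collision detected: back away first
    if sensors.get("tilt"):
        return "stabilize"      # about to tip over
    if sensors.get("food"):
        return "collect"        # food detected nearby
    if sensors.get("ir"):
        return "signal_peers"   # another robot in IR range
    if sensors.get("light"):
        return "seek_light"
    return "wander"             # default exploratory behavior

print(choose_action({"bump": True, "food": True}))  # bump wins: "reverse"
```

Because each robot runs this loop locally while also exchanging IR messages, group behavior emerges without any central controller.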

SWARMs

The main goal of this project is to develop a framework and methodology for analyzing swarming behavior in biology and synthesizing bio-inspired swarming behavior for engineered systems.  The idea was developed by Prof. Vijay Kumar at the University of Pennsylvania to answer questions such as: Can large numbers of autonomous vehicles work as a group to carry out a prescribed mission, with or without a leader? Can they change roles while operating in a hostile environment?

The work develops high-end algorithms, control laws and hardware so that vehicles can perform specified tasks and make switching decisions by communicating with other group members.  For example, one controller was derived from hydrodynamic models (treating the swarm as an incompressible fluid, itself an interdisciplinary idea), alongside an algorithm for task allocation without communication.
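The leaderless-coordination idea can be illustrated with a toy consensus rule: each vehicle repeatedly averages its heading with its neighbors', and the whole swarm converges on a common heading with no designated leader. This is a simplified sketch only; the actual SWARMS control laws are far richer.

```python
# Toy leaderless consensus: each agent updates toward the mean of its
# neighborhood (itself plus its neighbors). With a connected topology,
# all headings converge to a common value without any leader.

def consensus_step(headings, neighbors):
    """One synchronous update of every agent's heading."""
    new = []
    for i, h in enumerate(headings):
        group = [headings[j] for j in neighbors[i]] + [h]
        new.append(sum(group) / len(group))
    return new

headings = [0.0, 90.0, 180.0, 270.0]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # ring topology
for _ in range(50):
    headings = consensus_step(headings, neighbors)
# After enough steps all four headings are (numerically) identical.
print(headings)
```

With this symmetric ring topology the common value is the mean of the initial headings (135.0 here); real swarm controllers build on the same local-averaging principle to align velocities and maintain formations.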

DIVISION OF LABOR

Another similar project is underway at the Laboratory of Intelligent Systems at EPFL, Switzerland.  The project focuses on the evolutionary dynamics of fixed and adaptive mechanisms from both an engineering and a biological perspective.  The aim is to reveal the complexities arising from multiple-agent interactions, to develop distributed control algorithms, and to compare them with engineering explanations for the colonial traits observed in ants, bees and other social insects.

As part of this research, they developed a framework based on artificial neural networks for modeling task allocation, which captures workers’ behavioral flexibility to stimulus and the colony’s response to varying stimuli. The bio-inspired models offer a new perspective on task completion in teams through agent swapping.  Similarly, in other domains of teamwork, bio-inspired theories help in the invention of new models.
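A classic way to model this kind of division of labor is the response-threshold model: a worker engages a task with a probability that rises steeply once the task's stimulus exceeds that worker's personal threshold. The EPFL framework uses neural networks, so the sketch below is a simpler stand-in that captures the same idea; the parameter values are illustrative.

```python
# Response-threshold model of division of labor (a simplified stand-in
# for the neural-network framework described above): P(engage) is low
# below a worker's threshold and high above it.

def engage_probability(stimulus, threshold, steepness=2):
    """P(engage) = s^n / (s^n + theta^n), the standard threshold response."""
    s_n = stimulus ** steepness
    return s_n / (s_n + threshold ** steepness)

# A "specialist" (low threshold) responds to weak stimuli; a "reluctant"
# worker (high threshold) engages only when the need is strong.
print(engage_probability(stimulus=5, threshold=1))   # ~0.96
print(engage_probability(stimulus=5, threshold=20))  # ~0.06
```

Heterogeneous thresholds across a colony produce division of labor automatically: as a neglected task's stimulus grows, it recruits progressively more reluctant workers, and the colony's response scales with demand.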

The above examples emphasize that nature still holds many untapped ideas that can be applied across domains.

There are numerous examples in nature of structural flexibility, such as leaves and skin. Along similar lines, researchers at the Wyss Institute at Harvard have developed soft actuators/sensors fabricated on a flexible sheet.  One use of such an actuator/sensor is to measure the stretching of the material on which it is mounted.  Another example is the improvement of communication protocols for machines working collectively.  New mathematical models of spatial relationships are being developed to identify relative positions in dynamic systems. Similarly, a new class of algorithms inspired by swarm intelligence is currently being developed that can potentially address problems in communication networks such as increasing network size, rapidly changing topology, and complexity.
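Swarm-inspired network algorithms of this kind often work like ant foraging: next hops are chosen probabilistically in proportion to a "pheromone" level, successful routes are reinforced, and all trails slowly evaporate so the network adapts when topology changes. The sketch below illustrates the principle; the hop names, reward and evaporation rates are illustrative assumptions, not taken from any specific protocol.

```python
# Ant-colony-inspired route selection: probabilistic choice weighted by
# pheromone, reinforcement of routes that work, and slow evaporation so
# stale routes fade when the topology changes.
import random

pheromone = {"via_A": 1.0, "via_B": 1.0}  # two candidate next hops

def pick_next_hop(pheromone, rng=random):
    """Choose a hop with probability proportional to its pheromone level."""
    hops = list(pheromone)
    weights = [pheromone[h] for h in hops]
    return rng.choices(hops, weights=weights, k=1)[0]

def reinforce(pheromone, hop, reward=0.5, evaporation=0.1):
    """Evaporate all trails slightly, then deposit pheromone on the used hop."""
    for h in pheromone:
        pheromone[h] *= (1 - evaporation)
    pheromone[hop] += reward

# Repeatedly rewarding via_A shifts traffic toward it, with no
# central controller involved.
for _ in range(20):
    reinforce(pheromone, "via_A")
print(pheromone["via_A"] > pheromone["via_B"])  # True
```

Because evaporation continuously discounts old information, such schemes cope naturally with the rapidly changing topologies mentioned above.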

As mentioned earlier, there is no end to learning from nature … and the quest will go on.

Machine automation – revisiting the history

Most of the earliest automatic systems were developed in Greece, and one that survived is the “Antikythera mechanism” (150-100 BC), which was designed to calculate the positions of astronomical objects.   The next breakthrough was the first programmable machine, developed around 60 AD by a Greek engineer called Hero (Article).  He constructed a three-wheeled cart that performed stunts in front of an audience.  The power came from a falling weight that pulled on string wrapped round the cart’s drive axle, and this string-based control mechanism is closely analogous to a modern programming language (Video).

The above examples demonstrate how mechanization techniques were already advancing more than 2000 years ago.  The word automation itself comes from the Greek automaton (“acting of one’s own will”).  It is used to describe non-electronic moving machines, especially those imitating human or animal actions.  From the examples available in the literature, the earlier motivation for automation was mainly entertainment: it was a point of pride for a town or kingdom to own such mechanical machines.  Many examples are available on the Wikipedia pages for Automaton (Ancient Automaton).

The era of automatons for amusement and pleasure continued until the 17th–18th centuries, when the need for automation during the “Industrial Revolution” or “Machine Age” became apparent.  With the invention of new energy technologies (steam engines, the spinning jenny, the water frame, etc.), better systems and mechanisms were developed, especially around industries like weaving, milling and power generation. During the Industrial Revolution, the mechanization of systems reduced manual intervention and increased productivity. Hence, the main motivation for automation (beyond amusement) became productivity. (Link)

With the commercialization of the transistor in the 1950s, numerical control of machines became possible and automation really began to take off.  Systems were transformed from open loop to closed loop with the help of electronics, and new systems began to be built around electronics because it makes processes faster (and ultimately more productive).  The focus thus shifted from mechanization to electronics: purely mechanical systems became electro-mechanical, communication systems and the software domain developed, and ultimately web services were invented.  With the development of such complex systems, the motivation for automation has branched into three parts:

  • Automation for making processes efficient – for example, automated factories (FANUC), where lights-out manufacturing started in 2001: robots manufacture other robots, and the factory can run unsupervised for as long as 30 days.
  • Automation for making systems secure – ballot boxes and ATMs are the best examples of automation helping technology work securely.
  • Automation for solving complex problems – there was a time when flight was very difficult, but advanced automation now makes even unmanned, automated guided flight vehicles possible.

In recent years, with the exponential development of artificial intelligence, complex algorithms, and approaches to NP-hard (nondeterministic polynomial time) problems, automation has evolved to new levels.  Improvements in the fabrication of micro-electronic components, together with flexible electronics technology, let machines and automation shrink to the miniature level.  In the years to come, the motivation for automation will be to build self-dependent systems, self-learning systems, systems that work in teams, and compact systems. Some of these already exist, like ASIMO in research labs (Video). Upcoming areas of development, from a practical perspective, are robots and machines equivalent to insects, animals and humans, working in all terrains and environments and communicating with similar machines.

Trends in Storage – Transition to All-Flash & beyond

The use of flash in storage devices is not new; however, we are now seeing the increased use of flash compared to disk storage.  Almost all storage companies now provide hybrid solutions that have a mix of SSDs and HDDs in their boxes.  Those that don’t are completely switching over to leverage the advantages of SSDs.  As the costs of SSDs plummet further, we will see SSD being used more aggressively in storage boxes.  Companies like Avere, Marvell, Starboard are providing unique offerings with SSD supported devices. Soon, companies like XtremIO (recently acquired by EMC) with all-flash products will enter the fray.  Looking forward, there are some new memory technologies that could potentially replace flash in years to come.

Flash Technology

NAND flash technology is only a decade old. However, it has already gained significant traction due to its mechanical characteristics and performance. SSDs with NAND flash have a number of advantages over HDD devices. Some of them are:

  • Power savings of 2x
  • No noise
  • No vibration (since there are no moving parts)
  • Very little heat
  • About 30% faster than HDDs
  • Magnetic field safe

SSD costs, although falling, are still higher than HDD costs. This is the only factor preventing a complete replacement of HDDs in storage products. See this article in Storage Review for a detailed comparison between SSD and HDD drives.
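A quick back-of-the-envelope calculation shows why cost per gigabyte is the sticking point. The prices below are illustrative placeholders (not current market figures), chosen only to show the shape of the comparison.

```python
# Illustrative cost-per-GB comparison (prices are made-up placeholders):
# SSDs win on performance and power, but HDDs still win on $/GB.

def cost_per_gb(price_usd, capacity_gb):
    return price_usd / capacity_gb

ssd = cost_per_gb(price_usd=200, capacity_gb=256)   # ~0.78 $/GB
hdd = cost_per_gb(price_usd=100, capacity_gb=2000)  # 0.05 $/GB
print(f"SSD is {ssd / hdd:.0f}x the cost per GB of HDD")
```

A gap of an order of magnitude per gigabyte is exactly what makes hybrid and tiered designs (a small SSD layer in front of bulk HDD capacity) the economically attractive middle ground described below.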

SSD offerings

Storage companies are already offering several solutions built around SSD drives in their storage servers and boxes. There are also ways to adopt SSDs in the storage environment gradually while improving the value proposition for customers.

Major vendors like EMC Corp. and NetApp Inc. have placed flash memory in their storage arrays and designed controller software to use the flash memory as a cache. EMC FAST (Fully Automated Storage Tiering) Cache improves the performance of existing SATA, FC and SAS drives. NetApp, on the other hand, uses FlashCache to improve performance. This also compensates for the performance penalty of its de-duplication technology (designed for capacity optimization). See this article by Joerg Hallbauer for a nice comparison between these technologies.

Avere Systems and Marvell take a different standpoint. Avere’s FXT caching appliance sits between NAS arrays and clients.  Ron Bianchini, founder and CEO of Avere Systems, claims that the appliance delivers 50 times lower access latency than existing NAS devices. Marvell’s Dragonfly VSA is designed to be placed inside the server. It uses NVRAM and SSD caches for random write handling.

Storage vendors are also transforming their fixed RAID systems to automatically tiered storage devices.  EMC’s FAST Virtual Pool is an example of a device in this category.  It places only data that requires high speed access to SSD drives while data that is only moderately used is placed on SAS drives.  Starboard Storage in its AC72 system also utilizes SSDs and HDDs with automated tiering. Data that is less frequently used is targeted towards HDDs.

By moving “hot” data to faster storage devices, tiered storage systems can perform faster than similar devices without the expense of widely deploying these faster devices.  Conversely, automated tiering can be more energy- and space-efficient because it moves “bulk” data to slower but larger-capacity drives.
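The core idea behind tiering products like FAST VP or the AC72 can be sketched in a few lines: blocks are periodically ranked by access count, and the hottest ones are placed on the small SSD tier while the rest stay on HDD. This is a simplified illustration (the block names and counts are invented); real systems add sub-LUN granularity, hysteresis, and scheduled relocation windows.

```python
# A minimal sketch of automated tiering: rank blocks by access frequency
# and promote the hottest ones into the limited SSD tier.

def assign_tiers(access_counts, ssd_slots):
    """Return {block: tier}, putting the most-accessed blocks on SSD."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    hot = set(ranked[:ssd_slots])
    return {b: ("ssd" if b in hot else "hdd") for b in access_counts}

counts = {"db_index": 900, "db_table": 400, "backup": 3, "logs": 120}
print(assign_tiers(counts, ssd_slots=2))
# db_index and db_table land on SSD; backup and logs stay on HDD.
```

Run repeatedly over fresh statistics, this policy automatically demotes data as it cools, which is what delivers the energy and space efficiency described above.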

Storage vendors are also coming up with “all-flash” products – despite the costs involved – to cater to customers that demand speed.  EMC recently announced “Project X”, which utilizes XtremIO technology to provide an all-flash storage box that is fast and uses inline de-dup technology.

Future Memory Technologies

Even while we are considering the current industry trend towards flash SSD based devices, there are future technologies that could disrupt flash. Potential successor technologies to flash include Resistive RAM (RRAM), Magnetoresistive RAM (MRAM) and Phase-change memory (PCM).  But, more about these memory types in a different article.

VM Portability – OVFTool

VM portability has become even more important with the evolution of virtualization in cloud computing.  It involves not only moving virtual images around but also the various configurations of application, data, identity, security, and networking.  Even if all the components were themselves virtualized, simply porting the virtual instances from one location to another is not enough to assure interoperability: the components must be able to collaborate, and that requires connectivity and other configuration information.

One solution is VMware’s OVF/OVA format. OVF is claimed to enable efficient, flexible, and secure distribution of enterprise software, facilitating the mobility of virtual machines and platform independence (Xen, KVM, Microsoft, VMware, etc.).

What follows is an attempt to use it with VMware. Generating an OVF/OVA from a virtual machine using the vSphere client turned out to be a simple process: select the VM in vSphere and click “Export to OVF” in the main menu.

However, this is a manual process, and it doesn’t help if we need to repeat it for different builds of software or for a large number of VMs.  To automate it, we need a CLI we can call from a script. VMware’s OVF Tool provides that capability (and much more, which we will discuss in future posts).  So I tried to automate this through shell and Perl scripts.  Both scripts install a new RPM on a CentOS VM and then export it to OVF format.

SHELL Version:
#!/bin/sh
#
# Description: install an RPM on the CentOS VM (ip=$1), then export the VM
# to OVF format with ovftool.
#
if test $# -ne 3
then
    echo "Usage: $0 vmIP rpmPath esxIP"
else
    fileName=$(basename "$2")
    if ssh "$1" "scp 192.168.112.132:$2 . && rpm -Uvh $fileName"
    then
        ovftool "vi://root@$3/CentOS" "/home/CentOS.ovf"
    else
        echo "Unable to connect to $1"
    fi
fi

PERL Version:
#!/usr/bin/perl -w
use strict;

use Net::OpenSSH;

if ($#ARGV != 2) {
    print "usage: perl installRPM_ExportVM2OVF.pl vmIP rpmPath esxIP\n";
    exit;
}
my $vmIP    = $ARGV[0];
my $rpmPath = $ARGV[1];
my $esxIP   = $ARGV[2];

my $sshCentOSVM = Net::OpenSSH->new($vmIP, user => 'root', password => 'PASSWORD');

$sshCentOSVM->error and die "Unable to connect to remote host: " . $sshCentOSVM->error;

# Extract the RPM file name from its path.
my @values = split('/', $rpmPath);
my $fileName = $values[$#values];

$sshCentOSVM->system("scp 192.168.112.132:$rpmPath .; rpm -Uvh $fileName;");

system("ovftool vi://root\@" . $esxIP . "/CentOS /home/CentOS.ovf");

Upcoming posts will include the usage for other hypervisor platforms.

Trends in Storage – Phase Change Memory (PCM)

What is Phase Change Memory?

Phase change memory (PCM) is an emerging non-volatile solid-state memory technology employing phase change materials.  It has been considered as a possible replacement for both flash memory and DRAM, but the technology still needs to mature before it can be put to production use.

We may not realize it, but we are already using phase change materials to store data – they are used in re-writeable optical storage, such as CD-RW and DVD-RW discs.  For optical drives, bursts of energy from a laser put tiny regions of the material into amorphous or crystalline states to store data. The amorphous state reflects light less effectively than the crystalline state, allowing the data to be read back again.

Phase change materials, such as salt hydrates, are also capable of storing and releasing large amounts of energy when they move from a solid to a liquid state and back again.  Traditionally, they have been used in cooling systems and, more recently, in solar-thermal power stations, where they store heat during the day that can be released to generate power at night.

However, additional properties of PCMs are being researched that may allow new and exciting uses of these materials.

For memory devices, however, it is not their thermal or optical properties that make PCMs so attractive, but their ability to switch from a disorderly (amorphous) state to an orderly (crystalline) one very quickly.  PCM memory chips rely on glass-like materials called chalcogenides, typically made of a mixture of germanium, antimony and tellurium.  PCM exploits the pronounced change in electrical resistivity as the material switches between its two stable states, the amorphous and poly-crystalline phases.

Promise of PCM

With a combination of speed, endurance, non-volatility and density, PCM can enable a paradigm shift for enterprise IT and storage systems as soon as 2016.  The benefits of such a memory technology would allow computers and servers to boot instantaneously and would significantly enhance the overall performance of IT systems.  PCM can write and retrieve data many orders of magnitude faster than flash, enable higher storage capacities, and also not lose data when the power is turned off.

Phase change materials are also being considered for the practical realization of ‘brain-like’ computers where a PCM cell is used to act like a hardware neuron and to have a synaptic like functionality via the ‘memflector’, an optical analogue of the memristor.

How does Phase Change Memory work?

A PCM memory chip consists of chalcogenide sandwiched between two electrodes. One of the electrodes is a resistor which heats up when current passes through it.  A gentle pulse of electrical energy causes the resistor to heat the chalcogenide until it melts; as the material cools, it forms a crystalline structure. This state corresponds to the cell storing a “1”. When a short, stronger pulse is applied, the chalcogenide melts but does not form crystals as it cools, and it assumes a disorderly amorphous state corresponding to a “0”.  The amorphous state has a much higher electrical resistance than the crystalline state (for this reason, PCM cells are also sometimes referred to as “memristors”).  The complete process is reversible and controlled by the applied currents, so a PCM cell can switch between “0” and “1” over and over again.

If the amount of current supplied to the cell can be controlled, the chalcogenide enters an intermediate state that combines amorphous and crystalline phases.  This is the principle behind multilevel PCM, which can store multiple bits of information in a single cell.
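Conceptually, a multilevel read divides the cell's resistance range into bands, one per state (four states give two bits per cell). The sketch below illustrates this mapping; the resistance thresholds are invented for illustration, and real cells also suffer resistance drift over time, which is why drift-tolerant coding matters.

```python
# Illustrative multilevel-PCM read: map a measured cell resistance to the
# 2-bit value it encodes. Thresholds (in ohms) are made-up placeholders;
# lowest resistance = most crystalline state.

THRESHOLDS = [(10_000, "11"), (50_000, "10"), (200_000, "01")]

def read_cell(resistance_ohms):
    """Return the 2-bit value encoded by the cell's resistance band."""
    for upper, bits in THRESHOLDS:
        if resistance_ohms < upper:
            return bits
    return "00"  # fully amorphous, highest resistance

print(read_cell(5_000))      # "11"  (crystalline)
print(read_cell(1_000_000))  # "00"  (amorphous)
```

Packing more states per cell means narrower bands, which is why distinguishing 16 or 512 levels demands ever more sensitive read circuitry, as discussed next.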

IBM researchers have built PCM memory chips with 16 states (or four bits) per cell, and David Wright, a data-storage researcher at the University of Exeter, in England, has built individual PCM memory cells with 512 states (or nine bits) per cell. But the larger the number of states, the more difficult it becomes to differentiate between them, and the higher the sensitivity of the equipment required to detect them, he says.

When was PCM discovered?

Although the concept of phase change materials came along some 40 years ago, it was only in 2011 that scientists at IBM Research demonstrated that PCM can reliably store multiple data bits per cell over extended periods of time.

What is the performance of Phase Change Memory?

PCM exhibits highly desirable characteristics, such as rapid state transition, good data retention and performance, as well as future scaling to ultra-small device dimensions.  Writing to individual flash-memory cells involves erasing an entire region of neighbouring cells first.  This is not necessary with PCM memory, which makes it much faster.  Indeed, some prototype PCM memory devices can store and retrieve data 100 times faster than flash memory.

Another benefit of PCM memory is that it is extremely durable, capable of being written and rewritten at least 10 million times.  Flash memory, by contrast, wears out after a few thousand rewrite cycles, because of the high voltages required to move electrons in and out of the floating-gate enclosure.  Accordingly, flash memory needs special controllers to keep track of which parts of the chip have become unreliable, so they can be avoided.  This increases the cost and complexity of flash, and slows it down.

PCM is also inherently fast because the phase-change materials can flip their phase very quickly, on the order of a few nanoseconds.  Recently it has been shown through simulations that these phase-change mechanisms can happen on the sub-nanosecond time scale as well.

In addition, PCM offers greater potential for future miniaturisation than flash.  As flash-memory cells get smaller and devices become denser, the number of electrons held in the floating gate decreases.  Because the number of electrons is finite, there will soon come a point at which this design cannot be shrunk any further.  PCM offers a radically different approach.  With PCM, the changes between the crystalline and amorphous states don’t involve the movement of electrons.  Therefore, by nature, phase change is less harmful to the material and it doesn’t deteriorate as easily over time as flash.

The IBM research team believes that multi-level phase change memory technology could be ready for use by 2016.

How will PCM be used?

Replacing flash is not going to be easy though.  Flash technology has a huge customer base.  As of today, flash is the most advanced technology of all the solid-state technologies out there. However, Flash and PCM may play in different spaces.  PCM could serve as the main memory for enterprise class applications due to its very high endurance and better latency properties.  PCM could also complement DRAM in future products where instead of using a small DRAM, there could be a bigger pool with PCM and DRAM, with the DRAM serving as a cache for the PCM.

At the same time, some of the biggest memory manufacturers are already considering moving to PCM as a replacement for NOR flash (used in cell phones to store program code).  Because NOR flash is reaching the end of its scaling pathway, this is one area where people think PCM can enter the market.

The technology could benefit applications such as “big data” analytics and cloud computing.

Operating systems, file systems, databases and other software components need significant enhancements to enable PCM to live up to its potential.  Studies show that any piece of software that spends a lot of time trying to optimize disk performance is going to need significant reengineering in order to take full advantage of these new memory technologies.

Who is leading the work on Phase Change Memory?

Companies like Micron Technology, Samsung and SK Hynix—the three giants of digital storage—are already applying PCM inside memory chips.  The technology has worked well in the laboratory for some time and is now moving towards the mainstream consumer market.  Micron started selling its first PCM-based memory chips for mobile phones in July, offering 512-megabit and one-gigabit storage capacity.

IBM is now working with SK Hynix to bring multi-level PCM-based memory chips to market.  The aim is to create a form of memory capable of bridging the gap between flash, which is used for storage, and dynamic random-access memory, which computers use as short-term working memory, but which loses its contents when switched off.  PCM memory, which IBM hopes will be on sale by 2016, would be able to serve simultaneously as storage and working memory—a new category it calls “storage-class memory”.

Conclusion

PCM promises to be smaller and faster than flash, and will probably be storing your photos, music and messages within a few years.

PCM memory does not merely threaten to dethrone flash; it could also lead to a radical shift in computer design: a phase change on a much larger scale.

References

  • The paper “Drift-tolerant Multilevel Phase-Change Memory” by N. Papandreou, H. Pozidis, T. Mittelholzer, G.F. Close, M. Breitwisch, C. Lam and E. Eleftheriou, was recently presented by Haris Pozidis at the 3rd IEEE International Memory Workshop in Monterey, CA.
  • The Economist: “Phase-change memory, Altered states”, Q3 2012
  • IBM Research, Zurich. “IBM scientists demonstrate computer memory breakthrough”
  • Search Solid State Storage. “UCSD lab studies future changes to non-volatile memory technologies”
  • Search Solid State Storage. “New memory technologies generate attention as successor to NAND flash”
  • Arithmetic and Biologically-Inspired Computing Using Phase-Change Materials by C. David Wright, Yanwei Liu, Krisztian I. Kohary, Mustafa M. Aziz, Robert J. Hicken

Spotlight – XtremIO

Introduction

On May 10, 2012, EMC announced that it acquired privately held XtremIO.  This article talks about XtremIO, the technology, the reasons behind the acquisition, and what it means for other big players.

About the Company

XtremIO is based in Herzliya, Israel (“The Start-Up Nation”). It was founded in 2009 and has raised $25 million in venture capital funding.  It provides an “All-flash” technology product built from the ground up using data reduction techniques such as inline deduplication to lower costs and save capacity.

It competes against other all-flash array makers such as SolidFire, Texas Memory Systems (TMS), Violin Memory, Nimbus Data, Pure Storage and Whiptail.

Technology

XtremIO describes its all-flash array as having a scale-out clustered design to which additional capacity and performance can be added as needed.  It has no single point of failure and supports real-time inline data deduplication.  Being all-flash, the XtremIO system supports high levels of I/O performance, particularly for the random I/O workloads typical of virtualized environments, with consistently low (sub-millisecond) latency.  It also integrates with VMware through VAAI.
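Inline deduplication of this kind can be sketched simply: each incoming block is fingerprinted, and a block whose fingerprint has been seen before is replaced by a reference instead of being written again. The sketch below is an illustration of the general technique, not XtremIO's implementation; production systems use carefully chosen fingerprints plus verification to guard against hash collisions.

```python
# Sketch of inline block deduplication: fingerprint each incoming block;
# store the payload only the first time that fingerprint appears, and
# record a reference for every logical write.
import hashlib

store = {}       # fingerprint -> block data (written once)
references = []  # logical write log: fingerprints in arrival order

def write_block(data: bytes):
    """Write a block, storing its payload only if it is new."""
    fp = hashlib.sha256(data).hexdigest()
    if fp not in store:
        store[fp] = data      # unique block: physical write
    references.append(fp)     # duplicates cost only a reference
    return fp

for block in [b"AAAA", b"BBBB", b"AAAA", b"AAAA"]:
    write_block(block)
print(f"{len(references)} logical blocks, {len(store)} physical blocks")
# 4 logical blocks, 2 physical blocks
```

Doing this inline (before data hits the SSDs) both saves capacity and reduces writes to the flash, which extends drive lifespan, as noted in the Project X description below.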

XtremIO won a 2012 Green Enterprise IT award from the Uptime Institute for IT Product Deployment.

Acquisition of XtremIO by EMC

Israel-based companies, for the most part, are not great at selling – what they are great at is engineering.  Companies like EMC and NetApp have big sales channels and can pick up small Israeli start-ups relatively cheaply for their technology alone.  The XtremIO acquisition was reported to be valued at $430 million.

EMC and XtremIO also have natural ties in part because XtremIO co-founder Shuki Bruck sold his previous company Rainfinity to EMC.

Big competitors, including NetApp, HP, Dell, IBM, and Hitachi Data Systems may feel pressured to get in the game and look for such companies to acquire, reports Derrick Harris.  Indeed, NetApp was reportedly also trying to make a bid for XtremIO.

EMC Advantage

All-flash arrays are expensive, high-performance systems built for applications requiring high throughput, such as relational databases, big data analytics, large virtual desktop infrastructure or processes requiring large batch workloads like backups.

Flash arrays can deliver high performance using a relatively small amount of rack space, power and cooling.

The all-flash array of the type XtremIO offers will give EMC faster performance across both virtualized and big data environments, meaning it will also help EMC’s subsidiary VMware, which focuses on virtualization. Combined with EMC’s server-side PCI flash product called Project Lightning, which keeps hot data in an SSD cache sitting alongside the processor, that’s a powerful hardware platform for tomorrow’s applications.

EMC needed new technology, and rather than develop it in house, it chose to buy that technology, and a strong flash storage development team. The other large storage vendors will probably make similar purchases to catch up.

Rather than combine Isilon and VNX somehow, EMC acquired XtremIO. XtremIO offers scale-out, great data management and great performance.  In fact, their subsystem was built specifically for flash, whereas flash was an afterthought for NetApp (they still leverage an HDD-optimized subsystem).

Industry Impact

It is clear that flash is going to become even more important for the big storage players, and getting in first with XtremIO might pay off for EMC and become the deal of the year.

With pressure mounting on the other big players to catch up with EMC, similar companies could be the next acquisition targets: Fusion-IO, Violin Memory, Virident, and Kaminario are all candidates that other players might be looking at.

EMC Project X

At VMworld 2012, EMC showed an early version of the all-flash array based on XtremIO technology.  Project X, as the array is known for now, has been revealed to have dual Intel-based controllers in each X-brick scaling unit along with a shelf of flash drives, 2 host adaptors with 2 ports each (supporting FC and iSCSI), and Infiniband connecting the modules together in a scale-out manner.

The demo claimed 2600% dedupe rates.  The dedupe is global, inline, and always on, and is said to extend SSD lifespans by reducing the rate of writes to each drive.  The array delivers a predictable sub-millisecond I/O response time for every 4K block no matter what you happen to be doing: read, write, sequential, random, snaps, etc.  A million IOPS, once a headline number, can be reached with a very modest configuration of XtremIO modules.
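Inline deduplication of the kind described above can be sketched as content-addressed block storage: each incoming block is fingerprinted, and a block whose fingerprint is already known is recorded only as a reference. The 4K block size matches the article; everything else is a simplification for illustration, not XtremIO's actual design:

```python
import hashlib

class InlineDedupStore:
    """Toy content-addressed store: identical blocks are kept only once."""
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}        # fingerprint -> block data (stored once)
        self.logical_map = []   # logical block address -> fingerprint

    def write(self, data):
        # Split the write into fixed-size blocks and dedupe inline.
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:      # new content: store it
                self.blocks[fp] = block
            self.logical_map.append(fp)    # a duplicate costs only a pointer

    def dedupe_ratio(self):
        # Logical blocks written vs. physical blocks actually stored.
        return len(self.logical_map) / max(len(self.blocks), 1)

store = InlineDedupStore()
store.write(b"A" * 4096 * 26)   # 26 identical 4K blocks
print(f"dedupe ratio: {store.dedupe_ratio():.0f}:1")  # → dedupe ratio: 26:1
```

A 26:1 ratio corresponds to the 2600% figure from the demo, and the same mechanism explains the SSD-lifespan claim: only unique blocks ever generate physical writes.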

The price of the new machines was not disclosed or even discussed, but a likely release date of somewhere in the first half of 2013 remains on EMC’s agenda.

References

  • The Register, “EMC shows off XtremIO’s Project X box”
  • VentureBeat.com, “EMC’s buy of XtremIO for $400M could spur M&A rush in flash storage”
  • VentureBeat.com, “Flash storage mania — EMC buys XtremIO, eyes turn toward Violin”
  • Gigaom, “If EMC buys XtremIO, the flash war is on”
  • EMC, “VMware view solution guide”
  • Computer Weekly, “XtremIO: Costly mistake or genius deal for EMC?”
  • Chuck’s Blog, “When Flash Changed Storage: XtremIO Preview”

Spotlight – Akamai: Pioneer in CDN

Akamai was recently in the news for its acquisitions of Blaze Software Inc. and Cotendo Inc., and it has done it again with yesterday’s acquisition of FastSoft Inc., a provider of content acceleration software.  The acquisition is expected to enhance Akamai’s cloud infrastructure solutions with technology for optimizing the throughput of video and other digital content across IP networks.

This article talks about Content Delivery Networks in general, Akamai, and its recent acquisitions.

Overview of Content Delivery Network (CDN)

Today the internet has about 77 TBps of global capacity.  As the internet grows, the number of Internet Exchange Points (IXPs) across the world has increased from 50 in 2000 to over 350 in 2012. Today, when a person requests a video stream or an internet download, the data is sent through a content delivery network, so it doesn’t need to travel as far as it would if it were sent directly from the source server to the user.  As a result, the user gets better quality of service, and server load is reduced because the data is cached across the content delivery network. Over 45 percent of web traffic today is delivered over CDNs.

Conceptually, a delivery network is a virtual network built as a software layer over the actual internet, deployed on widely distributed hardware, and tailored to meet the specific system requirements of distributed applications and services.  A delivery network provides enhanced reliability, performance, scalability and security that is not achievable by directly utilizing the underlying Internet.  A CDN, in the traditional sense of delivering static Web content, is one type of delivery network.  Today CDN encompasses dynamic content as well.
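The "software layer over the actual internet" idea can be made concrete with a toy request router that sends each client to its nearest edge cache and goes back to the origin only on a cache miss. All server names and latency figures below are hypothetical, chosen only to show why the edge shortens the path:

```python
# Toy CDN: route each request to the nearest edge; serve from cache if possible.
ORIGIN_LATENCY_MS = 180                      # hypothetical origin round trip
caches = {"edge-eu": {}, "edge-us": {}, "edge-asia": {}}
origin = {"/video.mp4": b"...bytes..."}      # authoritative content store

def fetch(url, client_latencies):
    """Return (content, total latency in ms) for a request via the nearest edge."""
    edge = min(client_latencies, key=client_latencies.get)  # pick nearest edge
    latency = client_latencies[edge]
    if url not in caches[edge]:              # cache miss: pull from origin once
        caches[edge][url] = origin[url]
        latency += ORIGIN_LATENCY_MS
    return caches[edge][url], latency

client = {"edge-eu": 20, "edge-us": 95, "edge-asia": 160}  # hypothetical RTTs
_, first = fetch("/video.mp4", client)    # miss: edge hop + origin fetch
_, second = fetch("/video.mp4", client)   # hit: edge hop only
print(first, second)  # → 200 20
```

The second request is an order of magnitude faster than the first, and the origin is touched only once no matter how many clients behind that edge ask for the same object, which is the server-load reduction the article describes.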

Overview of Akamai

Akamai, launched in early 1999, is the pioneer in Content Delivery Networks.  The company evolved out of an MIT research effort to solve the flash crowd problem.  It provided CDN solutions to help businesses overcome content delivery hurdles.  Since then, both the Web and the Akamai platform have evolved tremendously.  In the early years, Akamai delivered only Web objects (images and documents).  It has since evolved to distribute dynamically generated pages and even applications to the network’s edge, providing customers with on-demand bandwidth and computing capacity.

Today, Akamai delivers 15-20% of all Web traffic worldwide and provides a broad range of commercial services beyond content delivery, including Web and IP application acceleration, EdgeComputing™, delivery of live and on-demand high-definition (HD) media, high-availability storage, analytics, and authoritative DNS services.  Comprising more than 61,000 servers located across nearly 1,000 networks in 70 countries worldwide, the Akamai platform delivers hundreds of billions of Internet interactions daily, helping thousands of enterprises boost the performance and reliability of their Internet applications.

Akamai Acquisitions

The following list shows some of the Akamai acquisitions over its history.

  • June 2005, Akamai acquired Speedera Networks, valued at $130 million.
  • November 2006, Akamai acquired Nine Systems Corporation, valued at $164 million.
  • March 2007, Akamai acquired Netli, valued at $154 million.
  • April 2007, Akamai acquired Red Swoosh, valued at $15 million.
  • November 2008, Akamai acquired aCerno, valued at $90.8 million.
  • June 2010, Akamai acquired Velocitude LLC, valued at $12 million.
  • February 2012, Akamai acquired Blaze Software Inc., a provider of front-end optimization (FEO) technology.
  • March 2012, Akamai acquired Cotendo, valued at $268 million, which offers an integrated suite of web and mobile acceleration services.
  • September 13, 2012, Akamai acquired FastSoft Inc., a provider of content acceleration software.

The latest acquisition of Akamai is FastSoft Inc. which was launched in 2006 to commercialize network optimization technology.  FastSoft’s patented FastTCP algorithms improve Transmission Control Protocol (TCP), adding intelligence designed to increase the speed of dynamic page views and file transfer downloads while reducing the effects of network latency and packet loss.  FastSoft’s unique technology has helped improve website and web application performance across the first and last miles, as well as through the cloud, without requiring client software or browser plug-ins.  Combining FastSoft with Akamai’s existing network protocols is expected to help enable Akamai to optimize server capacity, deliver higher throughput for video, and bring greater efficiency to its global platform.
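FastSoft's FastTCP technology descends from FAST TCP, a delay-based approach to congestion control: instead of backing off only after packet loss, the sender compares the latest round-trip time against the smallest RTT it has seen and sizes its window to keep roughly a fixed number of packets queued in the network. A simplified sketch of that style of window update follows; the rule and the alpha/gamma values are illustrative, not FastSoft's actual parameters:

```python
def fast_tcp_window(w, rtt, base_rtt, alpha=20, gamma=0.5):
    """One delay-based congestion-window update (simplified FAST-TCP-style rule).

    w        -- current congestion window (packets)
    rtt      -- latest measured round-trip time
    base_rtt -- minimum RTT observed (propagation-delay estimate)
    alpha    -- target number of packets queued in the network
    gamma    -- smoothing factor for the update
    """
    target = (base_rtt / rtt) * w + alpha      # grows while queues are empty
    return min(2 * w, (1 - gamma) * w + gamma * target)

# On an uncongested path (rtt == base_rtt) the window ramps up steadily...
w = 10.0
for _ in range(5):
    w = fast_tcp_window(w, rtt=50.0, base_rtt=50.0)
print(w)  # → 60.0
# ...and shrinks as soon as queueing delay appears (rtt > base_rtt),
# reacting to latency well before any packet is actually dropped.
```

Reacting to rising delay rather than to loss is what lets this family of algorithms keep throughput high on long or lossy paths, which matches the article's claim about reducing the effects of latency and packet loss.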

If we focus on the 2012 acquisitions of Blaze, Cotendo and now FastSoft, they are indicative of a trend towards providing end-to-end acceleration for an entire leg of the transaction.  With the current proliferation of mobile devices and users accessing internet over mobile devices, Akamai is also targeting various performance improvements and network services to deliver content to these users with lower latency and better security than has previously been available.

Compuware’s Gomez platform is a well-known technology for measuring the performance of Web applications.  According to Gomez benchmarks, a mobile Web site takes 7.7 to 8 seconds to open, versus 2 seconds on a desktop computer, says Pedro Santos, VP of the Mobile Business at Akamai.

“So there is a tremendous opportunity to improve the performance of mobile web sites and applications,” he says, citing user surveys showing that 71% of consumers expect Web sites to open on a mobile phone as quickly as they do on a desktop computer, and that 77% of organizations today have mobile web pages that take longer than 5 seconds on average to open.

Akamai’s new products like Terra Alta and Aqua Mobile Accelerator also substantiate this trend.

It will be interesting to study the strategic response from Akamai’s competitors, such as Limelight Networks.

References

  • Network computing, “Akamai Boosts Web, Mobile App Performance”
  • http://www.prnewswire.com, “Akamai Acquires FastSoft”
  • Gigaom, “The shape of Internet has changed. It now lives life on the edge”

Low Energy Bluetooth – Possibilities for connected point of care diagnostic devices

Information sharing, or more explicitly connectivity between devices and systems for data sharing, is becoming a key aspect of personal and clinical healthcare solutions. While wired systems have been used, they clearly have usability issues: imagine a patient carrying a bundle of wires for a personal healthcare monitoring system, or a patient lying on the operating table surrounded by wires. Despite regulatory constraints, wireless technology is quickly proliferating as the preferred means of communication for wide-area applications (e.g., remote monitoring of patients) as well as short-range ones (e.g., patient monitors).

Companies can opt for a proprietary RF wireless solution or a standard wireless solution like Wi-Fi, Bluetooth, or ZigBee. A proprietary solution gives better control and can cater to specific needs, whereas standard technology reduces development and testing effort and makes regulatory expectations easier to manage.

The low energy Bluetooth standard is designed as a low cost solution with a focus on low power consumption, and it is targeted at applications that collect data from sensor-based device networks. It works on the concept of transmitting small amounts of data on an event (e.g., the periodic capture of vital data to a central hub), which results in low power consumption compared to regular Bluetooth (with its continuous data streaming).  The event-based system wakeup makes it ideal for sensor-based device networks. Its enhanced range is over 100 meters, with connection setup and data transfer latency as low as 3 ms. It also supports full AES-128 encryption using CCM. Low energy Bluetooth uses adaptive frequency hopping to minimize interference from other technologies, like Wi-Fi, in the 2.4 GHz band. The best part is that in dual mode a device can work on both the classic Bluetooth and the low energy Bluetooth protocol, depending on the master device, whereas in single mode it works on the low energy Bluetooth protocol only.
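The power advantage of event-based wakeup over continuous streaming can be made concrete with a back-of-the-envelope duty-cycle calculation. All current and timing figures below are hypothetical placeholders (real numbers depend on the chipset); the sketch only shows the shape of the argument:

```python
def avg_current_ua(event_interval_s, event_duration_ms, active_ua, sleep_ua):
    """Average current draw (uA) for a radio that wakes, transmits, and sleeps.

    All parameter values used below are illustrative, not measured figures.
    """
    duty = (event_duration_ms / 1000.0) / event_interval_s   # fraction of time awake
    return duty * active_ua + (1 - duty) * sleep_ua

# Hypothetical sensor: wakes once a second for 3 ms to push one reading.
ble = avg_current_ua(event_interval_s=1.0, event_duration_ms=3.0,
                     active_ua=15000, sleep_ua=1)
# Continuous streaming keeps the radio active the whole time.
streaming = avg_current_ua(event_interval_s=1.0, event_duration_ms=1000.0,
                           active_ua=15000, sleep_ua=1)
print(round(ble, 1), round(streaming / ble))  # → 46.0 326
```

Spending 3 ms awake per second cuts the average current by more than two orders of magnitude in this sketch, which is why event-driven wakeup, rather than raw radio efficiency, is the core of the low energy Bluetooth design.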

The standard Bluetooth health device profile focuses on patient monitoring and personal healthcare devices in both home and clinical environments. Personal healthcare is one of the best-identified business cases for low energy Bluetooth, and with initiatives from the Continua Health Alliance, there is already the required traction for integration between third-party devices. However, there is a great opportunity (via the serial port profile or other standard profiles) to use this technology in other areas of device connectivity as well.  Some examples could be:

  • Low Energy Bluetooth enabled Patient bedside vital monitoring devices, which send vitals data to a central device. The central hub could further connect to a central server on LAN or WLAN.
  • Continuous diabetic monitoring devices, which send the data to a Smartphone application. The data could be uploaded from a Smartphone to a central server for data management or other usage.
  • After surgery or rehabilitation monitoring devices
  • Smart wireless diagnostic catheters with a smart node that sends the data to the central data collection device over low energy Bluetooth

References

http://www.bluetooth.com/Pages/low-energy.aspx

Challenging World of Software Product Line

Software Product Line (SPL) is not a new concept and has existed for several years. It has been used across many industries, from avionics to medical, automotive to consumer electronics, and telecom to storage. Industry big names like Boeing, Philips, Toshiba, Nokia, Ericsson (AXE), and GM have applied this concept successfully. However, it has still not become The Thing in the industry.  Some of the challenges that inhibit this are:

  • Managing variability across products
  • Investment required in building the SPL based infrastructure
  • People resistance and the skills required

The variation across products is easier to manage if enough time is invested in identifying the products under the product line and core asset features are defined from the domain perspective. The process should be driven from the level of the domain, product, and system, not from the software level. From a broader perspective, SPL could be approached as a Platform Product Line in which every component of the system, including hardware, is covered.

The fundamental cost line items are to a large extent the same as those of standard software development. However, the economics of SPL suggest that as the number of products increases, SPL-based product development becomes more cost effective than the standard development approach. The initial investment in SPL is large, but over time it yields greater benefits, and the initial investment can be managed by using a hybrid approach: developing a product and the SPL core assets in parallel, with more effort spent in the early life-cycle phases to manage the SPL expectations.
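The break-even argument can be expressed as a simple cost model: one-off development pays a full per-product cost every time, while SPL pays a large up-front infrastructure cost plus a smaller per-variant cost. The figures below are illustrative assumptions, not industry data; the point is only the shape of the crossover:

```python
def breakeven_products(infra_cost, variant_cost, standalone_cost):
    """Smallest number of products for which SPL beats one-off development.

    Assumes variant_cost < standalone_cost, i.e. variants are genuinely
    cheaper to derive from the core assets than to build from scratch.
    """
    n = 1
    while infra_cost + n * variant_cost >= n * standalone_cost:
        n += 1   # one-off development still cheaper at n products; try n + 1
    return n

# Hypothetical costs in person-months: big SPL investment, cheap variants.
n = breakeven_products(infra_cost=100, variant_cost=10, standalone_cost=40)
print(n)  # → 4  (SPL pays off from the fourth product onward)
```

The model matches the article's claim: the more products the line ships, the more the large initial investment is amortized, while every one-off product keeps paying full price.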

Resistance to change is a human tendency, and in the case of SPL, although the basic activities are the same as in a standard software development process, a different mindset and thought process is required. Sponsorship from senior leadership can help in managing the resistance to change, but small steps towards the bigger SPL objective are more helpful. The small steps could be to start following the SDLC activities (in spirit) or, if they are already followed, to move toward Model Driven Development. In cases where the whole process is designed to be a skill enhancement for the individual, the change process is smoother.

In summary, there are advantages of the SPL approach like reduced time to market, long term cost, consistent better quality, and market adaptability. However, successful SPL requires time to deploy, a flexible process framework which evolves with experience and a thoughtful approach, which varies from industries, products, domains and organizations.

References

  1. http://www.sei.cmu.edu/productlines/
  2. http://www.splc.net

Green buses in India – Are we asking for too much?

There was a recent news release from Ashok Leyland, the flagship company of the Hinduja Group, about plans to launch hybrid Optare buses in India.  While we know of some bold transit authorities in the U.S. and Europe running buses on hybrid power trains, the news still appeared to be more of a marketing gesture than a serious business announcement.  Given the credibility of the company, however, we were intrigued to look at some ground facts on the possibility of serious application of hybrid technology in our mass transportation systems. Considering that India offers a meagre 1.29 buses per thousand passengers while other well-planned countries provide vastly more (Brazil has 10.3 buses per thousand), the opportunity to make an impact is tremendous.  Is hybrid the solution?

The hybrid transit bus evolution got a big marketing boost at the end of last year when the iconic London Redbus was prototyped on a hybrid power train (with the underlying design from Volvo).  There was a kind of anti-climax, though, because the bus had to pull over as the system was not designed for long-haul use.  In another development, China got its first indigenously developed solar-powered hybrid bus, which claims to prolong lithium battery life by 35 percent.  In India too, a lot of hype was created with the launch of the Tata Starbus.  While the future has to be green, the economics of these buses and the peripheral systems have to allow a hybrid bus service to work from a cost perspective (there is no point running just a few buses, as we may end up doing; it just doesn’t make economic sense). Costs may become a huge barrier to adoption beyond some pet ministerial projects.

Hybrid buses (most of them based on very costly technology from a few players) are at least 30% more expensive than the best buses running on Indian roads. Then there is also the choice of drive train: series hybrid or parallel hybrid. Series hybrids are recognized as more suitable for start-stop applications and allow flexible packaging, in comparison to a parallel system where the mechanical drive shaft has to be 1.5-2 m away from the rear axle. Series hybrids are very sensitive to failure, however: if any of the electrical components fails, the bus comes to a complete standstill, and unfortunately many of the new components are susceptible to this. Then there are the batteries: they are expensive, hazardous, and very bulky, they add significant maintenance cost, and some need to be replaced as often as every four years.

In the context of adapting hybrid buses for Indian public transport, I believe a few things need to happen before it can start making sense: better batteries, lower initial cost, a better ecosystem and, more importantly, economies of scale. India can also choose to wait for the next wave of evolution in hybrids.  For example, a new technology for charging electric buses, called Opportunity Charging, has already been introduced in Europe.  This economical technology enables contactless charging of electric buses, where the driver needn’t leave the bus for recharging. We will follow the evolution closely and hopefully play a part.