Light, Sound and Magnetism – the science behind next-generation medical technologies!

It was often hard to imagine far-reaching applications of basic physics when topics as humble as acoustics, optics and magnetism were introduced in our high school textbooks. It is enthralling now to see how these basic disciplines have been applied in some of the most sophisticated medical technologies of today's world. Out of this fascination, we decided to take a brief look at some of them –

Optical Coherence Tomography

Optical coherence tomography (OCT) is an emerging technology for high-resolution cross-sectional imaging. It is analogous to ultrasound imaging, except that it uses light instead of sound, and it can image tissue structure on the micron scale in situ and in real time. This lets OCT serve as a kind of optical biopsy: unlike conventional histopathology, which requires removing a tissue specimen and processing it for microscopic examination, OCT images the tissue in place. Using the time-delay information carried by light reflected from different depths inside a sample, an OCT system reconstructs a depth profile of the sample structure; three-dimensional images are then built up by scanning the light beam laterally across the sample surface. Lateral resolution is determined by the spot size of the light beam, whereas the depth (axial) resolution depends primarily on the optical bandwidth of the light source. OCT systems can therefore combine high axial resolution with a large depth of field, making them well suited to in-vivo imaging through thick sections of biological systems, particularly in the human body. The figure below compares OCT resolution and imaging depth with those of alternative techniques; the “pendulum” length represents imaging depth, and the “sphere” size represents resolution (image source – UWA).
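To make the bandwidth dependence concrete: for a light source with a Gaussian spectrum, the axial resolution is commonly quoted as

\Delta z = \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\Delta\lambda}

where λ₀ is the centre wavelength and Δλ the spectral bandwidth (FWHM). For example, a source centred around 800 nm with roughly 50 nm of bandwidth gives an axial resolution of about 5–6 µm, independent of the focusing optics.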

Ultrasound Elastography

Elastography is based on the physical principle of elasticity: a pressure is applied to the examined medium, and the induced strain distribution is estimated by tracking the tissue motion. Shear-wave variants instead visualize the propagation of mechanical waves through the tissue and derive a shear-wave velocity or a Young's modulus as a measure of tissue stiffness. In practical terms, RF ultrasonic data are acquired before and after the applied compression, and speckle-tracking techniques, e.g. cross-correlation methods, are used to calculate the resulting strain. The resulting strain image is called an elastogram. The original goal of elastography was the identification and characterization of breast lesions. To acquire an elastography image, the ultrasound technician takes a regular ultrasound image and then presses on the tissue with the transducer to take a compression image. Normal tissue and benign tumors are typically elastic and compress easily, whereas malignant tumors are much stiffer and barely deform. The image below shows a traditional ultrasound image and a corresponding real-time elastogram of an ablated lesion in an ex vivo liver. In the elastogram, blue corresponds to hard tissue and red to soft tissue. The lesion is not clearly visible in the traditional ultrasound image because the ablation process does not change the echogenicity of the tissue significantly; it is clearly visible in the elastogram (dark blue area) because ablation hardens the tissue significantly (image source – TAMUS).
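To make the speckle-tracking step concrete, here is a minimal sketch (not any scanner's actual pipeline) of 1-D strain estimation: the pre- and post-compression RF lines are split into windows, each window's displacement is found by normalized cross-correlation over a small search range, and strain is the spatial gradient of that displacement. The function name, window size and search range below are illustrative.

import numpy as np

def estimate_strain(rf_pre, rf_post, window=64, step=32, max_lag=16):
    """Toy 1-D elastography: track window displacements between pre- and
    post-compression RF lines, then differentiate to get strain."""
    displacements = []
    for start in range(0, len(rf_pre) - window - max_lag, step):
        ref = rf_pre[start:start + window]
        best_lag, best_corr = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            lo = start + lag
            if lo < 0 or lo + window > len(rf_post):
                continue
            seg = rf_post[lo:lo + window]
            # normalized cross-correlation of the two windows
            corr = np.dot(ref, seg) / (np.linalg.norm(ref) * np.linalg.norm(seg) + 1e-12)
            if corr > best_corr:
                best_corr, best_lag = corr, lag
        displacements.append(best_lag)
    # strain = spatial derivative of displacement along depth
    return np.gradient(np.array(displacements, dtype=float)) / step

Real systems add sub-sample interpolation, 2-D tracking and heavy regularization, but the principle is the same: stiff regions show low strain and soft regions show high strain.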

Magnetoencephalography

Magnetoencephalography (MEG) is a non-invasive technique for measuring the magnetic fields generated by small intracellular electrical currents in neurons of the brain. It captures ongoing brain activity on a millisecond-by-millisecond basis and shows where in the brain that activity is produced. MEG measurements are made externally, using an extremely sensitive detector called a superconducting quantum interference device (SQUID). The SQUID is a very low-noise magnetic field detector that converts the magnetic flux threading a pickup coil into a voltage, allowing weak neuromagnetic signals to be detected. Because the SQUID relies on physical phenomena found only in superconductors, it requires cryogenic temperatures to operate. Working at these temperatures with extremely low noise, SQUID sensors can detect and amplify the magnetic fields generated by neurons a few centimetres away. The equipment is housed in a magnetically shielded room to mitigate interference. Applications of MEG include localizing regions affected by pathology before surgical removal, determining the function of various parts of the brain, and neurofeedback.

Watch this space as we dive deeper into some of these technologies and explore the rapidly evolving medical technology landscape!


Is it high time to innovate for brown-field markets?

Green-field opportunities have traditionally been the focus for innovators: a chance to demonstrate how things can be made to work better, look better and do things that were once difficult to imagine. This is especially true for installations involving heavy infrastructure. It is significantly easier to adopt new ideas, concepts and products when you are setting up a new plant or process from scratch. On the other hand, there is inherently high inertia against trying something new in brownfield installations. Brownfield installations were originally designed for particular modes of production, with established practices and technologies, incumbent customers and competitors, supporting and specialized infrastructure, deep-rooted business relationships, and sometimes extensive government regulation. This reality has dissuaded potential brownfield innovators, especially in the automation OEM market.

Things have changed. With the economic downturn and a paucity of green-field opportunities, industrial product OEMs are, albeit reluctantly, looking for opportunities in existing installations. They are not finding it easy, however, and in emerging markets especially they are struggling.

There are a few aspects to the challenge of innovating in brownfield markets. First, the innovation has to fit and co-exist with the existing technical infrastructure. Sometimes the interoperability problems can be overwhelming and overshadow the benefits; unless strongly supported by economics (ROI) and strong intent, this alone can stall innovation. Consider someone trying to make HVAC control systems energy efficient in brownfield buildings (in India, say). The reality is that the engineering is so non-standard that a one-size-fits-all solution is impossible. New products and processes designed for mature, established markets must be judged on how well they fit within the complementary systems that make up the rest of the infrastructure.

Second, economics plays an even more significant role in brownfield innovation. Benefits are incremental in most cases and payback periods tend to be longer. Efficiencies have been built up over years of running plants in a particular way, tuned close to optimum. A trained workforce and physical assets that are often used well beyond their amortization give incumbent competitors extremely favourable economics. Anything that disturbs this optimized environment must provide especially compelling advantages.

Third, and perhaps most significant, there is the human factor: resistance to change and unwillingness to take risks in operational installations. Most operations managers are production-focussed and do everything they can to maximize output, sometimes sacrificing long-term efficiency and cost by taking only a short- to mid-term view.

Is there a reasonable way to ensure that sustainable and economically viable innovations are possible? Can one really make a difference? Is there a business case? We believe there is indeed a business case, and that product innovators have to place some realistic bets.

  • For starters, one should be ready to get one's hands dirty; it is not enough to model and design solutions from air-conditioned development centres. Problems have to be understood close to where they occur, and solutions devised accordingly.
  • More often than not, there is no one-size-fits-all solution, even for problems that look similar. On the ground, existing systems may not follow standards, and there may be interoperability issues. Engineering must be done per use case and fit for purpose.
  • While the innovation may replace some of the existing engineering systems and processes, the change must avoid disturbing core operations. This makes adoption easier and avoids change-related issues.
  • Brownfield innovation, more often than not, demands a local presence and close ties with an ecosystem that understands and supports the need.

Concept Realization Accelerated – Is Open Source Hardware for real?

A lot of us grew up programming on proprietary, closed platforms and were firm believers that that was the way serious products were built. It would be an understatement to say that most of us were proved wrong in our (mis)conceptions about the power of community. Younger software developers can always claim they knew all along that Linux and open source software were the way to go. Hardware folks, or so we thought, take their craft more seriously and would never be drawn to something similar. At least that's what we thought until Massimo Banzi told us that things could be made simpler in the electronics world as well … by letting low-cost, open-source hardware platforms proliferate and enabling thousands of innovators to experiment without having to worry about a very expensive hardware design process.

Banzi co-founded what is now popularly known to product innovators as the Arduino project, a cheap, easy-to-use, open-source hardware platform. Next time you have to build a small control system in your lab, don't bother designing your own PCB … Arduino (and a few other ready-to-use open-source hardware platforms) is all that you need.

Focus on the ideas … concept realization made cheap and easy.

While Arduino is without a doubt the poster child of the open source hardware movement, there are others too … BeagleBone is another open-source single-board computer that runs Linux. Because it is a computer, you can program it in whatever language you like, from C to the command line; Python, also open source, seems to be the most popular language for the BeagleBone. Then there is the Raspberry Pi, a credit-card-sized computer that plugs into your TV and a keyboard. It is a capable little PC that can do many of the things your desktop does, such as spreadsheets, word processing and games, and it also plays high-definition video. Its developers want to see it used by kids all over the world to learn programming.

And so on…

Most of these platforms have evolved to the point that a wide variety of daughterboard designs is available, providing a broad range of interfacing capabilities.

While this movement might seem like one for hobbyists, there's a larger world out there. Open source hardware enthusiasts will tell you that this will quickly prove to be a tremendous business driver, enabling companies to move faster and be more agile than ever. Open-source hardware is a way of accelerating innovation.

Next time you hear of something called Razdroid or the Android ADK (the latter released by Google … now that's cool), ignore it at your own peril … this is Android on your $30 board.

Bio-inspired automation technology – limitless possibilities

The ingenuity of Nature has always intrigued scientists and has inspired generations of inventors to understand how natural processes work and to apply that learning to come up with innovative bio-inspired solutions.  We thought we would, in this blog, talk about some of the work that has been going on and the “bio” inspiration driving that research.

THE ANTS

The Ants is a project developed at the MIT Artificial Intelligence Laboratory; the aim is to build a community of cubic-inch micro-robots that form a structured robotic community and exhibit social behavior. Each robot is equipped with five different sensors: IR, food, tilt, bump and light. The inspiration came from natural ant colonies. From a technical perspective, the challenge was to fit that much hardware into such a small volume (1 cubic inch). The microprocessor was just an 8-bit part running at 2 MHz with EEPROM, roughly equivalent to the first IBM PC. The important point is that each robot not only runs its own code but also, in parallel, communicates with its environment and with other robots, which adds considerable complexity.

SWARMs

The main goal of this project is to develop a framework and methodology for analysing swarming behavior in biology and synthesizing bio-inspired swarming behavior for engineered systems. The idea was developed by Prof. Vijay Kumar at the University of Pennsylvania to answer questions such as: can large numbers of autonomous vehicles work as a group to carry out a prescribed mission, with or without a leader? Can they change roles while working in a hostile environment?

The goal was to develop the algorithms, control laws and hardware that allow members of a group to perform specified tasks and make switching decisions by communicating with one another. For example, control laws were derived from hydrodynamic models (treating the swarm as an incompressible fluid, which is again interdisciplinary), along with algorithms for task allocation without explicit communication.
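We obviously cannot reproduce those control laws here, but a minimal sketch of one classic leaderless swarm behaviour, rendezvous by a local consensus rule in which each agent steps toward the average position of the neighbours it can sense, gives a flavour of how simple local rules produce coordinated group motion. All names and parameters below are illustrative, not taken from the SWARMS project.

import numpy as np

def rendezvous(positions, sense_radius=2.0, gain=0.2, steps=200):
    """Leaderless consensus: each agent repeatedly moves toward the mean
    position of all agents within its sensing radius (itself included)."""
    pos = np.array(positions, dtype=float)
    for _ in range(steps):
        new_pos = pos.copy()
        for i, p in enumerate(pos):
            dists = np.linalg.norm(pos - p, axis=1)
            neighbours = pos[dists < sense_radius]
            new_pos[i] = p + gain * (neighbours.mean(axis=0) - p)
        pos = new_pos
    return pos

# ten agents scattered over a 5x5 area cluster into one or more groups,
# depending on which agents can initially sense each other
print(rendezvous(np.random.rand(10, 2) * 5.0))

No agent knows the global picture, yet the group converges; the same flavour of local rule underlies flocking, formation control and coverage algorithms.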

DIVISION OF LABOR

Another similar project is underway at the Laboratory of Intelligent Systems at EPFL, Switzerland. The project focuses on the evolutionary dynamics of fixed and adaptive mechanisms from both an engineering and a biological perspective. The aim is to reveal the complexities arising from multiple-agent interactions, to develop distributed control algorithms, and to compare them with engineering explanations for the colonial traits observed in ants, bees and other social insects.

In the course of this research, they developed a framework based on artificial neural networks for modeling task allocation, capturing both individual workers' behavioural flexibility to stimuli and the colony's response to varying stimuli. Such bio-inspired models offer a new perspective on how teams complete tasks by swapping agents, and similar theories are inspiring new models in other domains of teamwork.
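We have not seen the EPFL models themselves, but a widely cited bio-inspired formulation of division of labour is the response-threshold model, in which a worker takes up a task with a probability that grows as the task stimulus exceeds the worker's personal threshold. The sketch below, with invented parameters, shows how colony-level regulation can emerge from such individual rules.

import random

def response_threshold_colony(n_workers=20, demand=2.0, steps=200):
    """Toy division-of-labour model: each worker engages in the task with
    probability s^2 / (s^2 + theta^2), where s is the shared task stimulus
    and theta is the worker's personal threshold."""
    thresholds = [random.uniform(0.5, 5.0) for _ in range(n_workers)]
    stimulus = 0.0
    history = []
    for _ in range(steps):
        stimulus += demand                    # unattended work raises the stimulus
        active = sum(1 for theta in thresholds
                     if random.random() < stimulus**2 / (stimulus**2 + theta**2))
        stimulus = max(0.0, stimulus - 0.2 * active)   # active workers reduce it
        history.append(active)
    return history

# the number of active workers settles around the level needed to absorb the demand
print(response_threshold_colony()[-10:])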

The above examples emphasize that nature still holds many untapped ideas that can be applied in almost any domain.

There are numerous examples in nature of structural flexibility, such as leaves and skin. Along similar lines, researchers at the Wyss Institute at Harvard have developed soft actuators and sensors fabricated on a flexible sheet. One use of such an actuator/sensor is to measure the stretching of the material on which it is mounted. Another example is the improvement of communication protocols between machines working collectively: new mathematical models of spatial relationships are being developed to identify relative positions in dynamic systems. Similarly, a new class of algorithms inspired by swarm intelligence is being developed that can potentially address problems in communication networks such as growing network size, rapidly changing topology and complexity.

As mentioned earlier, there is no end to learning from nature … and the quest will go on.

Machine automation – revisiting the history

Some of the earliest known automatic systems were developed in ancient Greece, and one that survives is the “Antikythera mechanism” (150-100 BC), designed to calculate the positions of astronomical objects. The next breakthrough was arguably the first programmable machine, built around 60 AD by the Greek engineer Hero of Alexandria (Article). He constructed a three-wheeled cart that performed stunts in front of an audience. The power came from a falling weight that pulled on string wrapped round the cart's drive axle, and this string-based control mechanism is, in effect, a primitive form of programming (Video).

The above examples mainly demonstrate how advances in mechanization were already happening more than 2000 years ago. The word automation itself comes from the Greek automaton (acting of its own will), used to describe non-electronic moving machines, especially those imitating human or animal actions. From the examples available in the literature, the earlier motivation for automation was mainly entertainment: it was a point of pride for a town or kingdom to have such mechanical machines. Many examples are available on the Wikipedia pages on automata (Ancient Automaton).

The era of automatons for amusement and pleasure continued until the 17th-18th centuries, when the need for automation during the “Industrial Revolution” or “Machine Age” became apparent. With the invention of new machines and power technologies (the steam engine, spinning jenny, water frame, etc.), better systems and mechanisms were developed, especially in industries like weaving, milling and power generation. During the Industrial Revolution, the mechanization of systems reduced manual intervention and increased productivity. Hence the main motivation for automation (apart from amusement) became the development of productive systems built around the technology of the day. (Link)

With the invention of the transistor and its adoption in the 1950s, numerical control of machines became possible and automation really began to fly. Systems were transformed from open loop to closed loop with the help of electronics. Systems increasingly came to be designed around electronics because it made processes faster and ultimately more productive. Quite surprisingly, the focus shifted from mechanization to electronics. Purely mechanical systems became electro-mechanical ones, communication systems and the software domain developed, and ultimately web services were invented. With the development of such complex systems, the motivation for automation has branched into three parts, namely:

  • Automation for making processes efficient – for example, automated factories (FANUC), where lights-out manufacturing started in 2001: robots manufacture other robots, and the factory can run unsupervised for as long as 30 days.
  • Automation for making systems secure – ballot boxes and ATMs are good examples where automation helps technology operate securely.
  • Automation for solving complex problems – there was a time when flight itself was very difficult; with advanced automation, complex systems such as unmanned, automatically guided flight vehicles are now practical.

In recent years, and in the near future, with the exponential development of artificial intelligence, complex algorithms, and approaches to NP-hard (nondeterministic polynomial time) problems, automation is evolving to new levels. Improvements in the fabrication of micro-electronic components, together with flexible electronics, are taking machines and automation down to miniature scales. In the years to come, the motivation for automation will be to build self-reliant systems, self-learning systems, systems that can work in teams, and compact systems. Some already exist, such as ASIMO in research labs (Video). Upcoming areas of development, from a practical perspective, are robots and machines comparable to insects, animals and humans, working in all terrains and environments and communicating with machines like themselves.

Trends in Storage – Transition to All-Flash & beyond

The use of flash in storage devices is not new; however, we are now seeing the increased use of flash compared to disk storage.  Almost all storage companies now provide hybrid solutions that have a mix of SSDs and HDDs in their boxes.  Those that don’t are completely switching over to leverage the advantages of SSDs.  As the costs of SSDs plummet further, we will see SSD being used more aggressively in storage boxes.  Companies like Avere, Marvell, Starboard are providing unique offerings with SSD supported devices. Soon, companies like XtremIO (recently acquired by EMC) with all-flash products will enter the fray.  Looking forward, there are some new memory technologies that could potentially replace flash in years to come.

Flash Technology

NAND flash-based SSDs are comparatively new, yet they have already gained significant traction thanks to their performance and the absence of moving parts. SSDs with NAND flash have a number of advantages over HDD devices. Some of them are:

  • Power savings of 2x
  • No noise
  • No vibration (since there are no moving parts)
  • Very little heat
  • About 30% faster than HDDs
  • Magnetic field safe

SSD costs, although falling, are still higher than HDD costs. This is the main factor preventing a complete replacement of HDDs in storage products. See this article in Storage Review for a detailed comparison between SSD and HDD drives.

SSD offerings

Storage companies are already offering several solutions built around SSDs in their storage servers and boxes. There are also ways in which SSDs can be introduced into a storage environment in a transitional manner while improving the value proposition for customers.

Major vendors like EMC Corp. and NetApp Inc. have placed flash memory in their storage arrays and designed controller software to use it as a cache. EMC FAST (Fully Automated Storage Tiering) Cache improves the performance of existing SATA, FC and SAS drives. NetApp, on the other hand, uses FlashCache to improve performance; this also compensates for the performance penalty of its de-duplication technology (designed for capacity optimization). See this article by Joerg Hallbauer for a nice comparison between these technologies.

Avere Systems and Marvell take a different approach. Avere's FXT caching appliance sits between NAS arrays and clients. Ron Bianchini, founder and CEO of Avere Systems, claims that the appliance delivers 50 times lower access latency than existing NAS devices. Marvell's Dragonfly VSA is designed to sit inside the server itself; it uses NVRAM and SSD caches to handle random writes.

Storage vendors are also transforming their fixed RAID systems into automatically tiered storage devices. EMC's FAST Virtual Pool is an example of a device in this category: only data that requires high-speed access is placed on SSDs, while moderately used data is placed on SAS drives. Starboard Storage's AC72 system likewise combines SSDs and HDDs with automated tiering, steering less frequently used data towards the HDDs.

By moving “hot” data to faster storage devices, tiered storage systems can perform faster than similar devices without the expense of widely deploying these faster devices.  Conversely, automated tiering can be more energy- and space-efficient because it moves “bulk” data to slower but larger-capacity drives.
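As a thought experiment, and not a description of how any vendor's tiering engine is actually built, the core policy can be reduced to a few lines: count accesses per block over a window, promote the hottest blocks up to the SSD tier's capacity, and leave the rest on HDD. The function and block IDs below are made up for illustration.

from collections import Counter

def plan_tiers(access_log, ssd_capacity_blocks):
    """Toy auto-tiering policy: the most frequently accessed blocks are
    placed on SSD, everything else stays on HDD."""
    counts = Counter(access_log)                      # block ID -> access count
    ranked = [block for block, _ in counts.most_common()]
    ssd = set(ranked[:ssd_capacity_blocks])
    hdd = set(counts) - ssd
    return ssd, hdd

# block 7 is "hot", so it is promoted to the SSD tier
ssd, hdd = plan_tiers([7, 7, 7, 3, 3, 9, 12, 7], ssd_capacity_blocks=2)
print(ssd, hdd)

Production systems work on coarser extents, add hysteresis so data does not thrash between tiers, and migrate in the background, but the hot/cold split above is the essence of automated tiering.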

Storage vendors are also bringing out all-flash products, despite the costs involved, to cater to customers that demand speed. EMC recently announced “Project X”, which uses XtremIO technology to deliver an all-flash storage box that is fast and uses in-line de-duplication.

Future Memory Technologies

Even as the industry trends towards flash-based SSD devices, there are future memory technologies that could disrupt flash. Potential successors include Resistive RAM (RRAM), Magnetoresistive RAM (MRAM) and Phase-change memory (PCM). But more about these memory types in a different article.


VM Portability – OVFTool

VM portability has become even more important with the rise of virtualization in cloud computing. It involves not only moving virtual images around, but also the various configurations of application, data, identity, security and networking that go with them. Even if all of these components are themselves virtualized, simply porting the virtual instances from one location to another is not enough to ensure interoperability: the components must be able to collaborate, and that requires connectivity and other configuration information.

One of the solutions is VMware's OVF/OVA format. OVF is claimed to enable efficient, flexible and secure distribution of enterprise software, facilitating the mobility of virtual machines and platform independence (Xen, KVM, Microsoft, VMware, etc.).

What follows is an attempt to use it with VMware. Generating an OVF/OVA package from a virtual machine using the vSphere client turned out to be a simple process: select the VM in vSphere and click “Export to OVF” in the main menu.

However, this is a manual process, and it doesn't help if we need to repeat it for different builds of software or for a large number of VMs. To automate it, we need a CLI that we can call from a script. VMware's ovftool provides that capability (and much more, which we will discuss in future posts). So I tried to automate the process with shell and Perl scripts. Both scripts install a new RPM on a CentOS VM and then export the VM to OVF format.

SHELL Version:
#!/bin/sh
#
# Description: Install an RPM on the CentOS VM (ip=$1) and export the VM
# to OVF format using ovftool.
#
if test $# -ne 3
then
    echo "Usage: $0 vmIP rpmPath esxIP"
else
    fileName=$(basename "$2")
    # Copy the RPM from the build host to the VM and install it
    if ssh "$1" "scp 192.168.112.132:$2 .; rpm -Uvh $fileName; exit;"
    then
        # Export the CentOS VM from the ESX host to an OVF package
        ovftool "vi://root@$3/CentOS" "/home/CentOS.ovf"
    else
        echo "Unable to connect to $1"
    fi
fi

PERL Version:
#!/usr/bin/perl -w
# Install an RPM on a CentOS VM over SSH, then export the VM to OVF format.

use strict;
use Net::OpenSSH;

if ($#ARGV != 2) {
    print "usage: perl installRPM_ExportVM2OVF.pl vmIP rpmPath esxIP\n";
    exit;
}
my $vmIP    = $ARGV[0];
my $rpmPath = $ARGV[1];
my $esxIP   = $ARGV[2];

# Open an SSH session to the CentOS VM
my $sshCentOSVM = Net::OpenSSH->new($vmIP, user => 'root', password => 'PASSWORD');
$sshCentOSVM->error and die "Unable to connect to remote host: " . $sshCentOSVM->error;

# Extract the RPM file name from its path
my @values   = split('/', $rpmPath);
my $fileName = $values[$#values];

# Copy the RPM from the build host to the VM and install it
$sshCentOSVM->system("scp 192.168.112.132:$rpmPath .; rpm -Uvh $fileName;");

# Export the CentOS VM from the ESX host to an OVF package
system("ovftool vi://root\@" . $esxIP . "/CentOS /home/CentOS.ovf");

Upcoming posts will include the usage for other hypervisor platforms.