The Effect of Prior Heavy Exercise On Progressive Exercise To Fatigue [MEDICINE]

At the onset of constant-load exercise in healthy humans, the rate of muscle metabolism increases to a steady state. However, when exercise is preceded several minutes earlier by a bout of high-intensity or 'heavy' exercise, steady state is typically achieved more quickly. The aim of the present study was to examine the metabolic effects of prior heavy exercise on a progressive exercise protocol, in which steady state is not achieved.


Scientific Abstract:

At the onset of constant-load exercise, the rate of muscle metabolism increases to a steady state. When exercise is preceded by a bout of heavy-intensity (HVY) exercise, steady state is typically achieved more quickly. The effects of a prior HVY bout on a progressive exercise protocol, in which a steady state is not achieved, are unknown. The present study, therefore, was designed to examine the effects of a prior HVY bout on a progressive plantar flexion exercise protocol in healthy humans. Subjects (n=5 males, age 25 ± 4 years) completed two randomized exercise protocols: a progressive exercise protocol (RAMP) and a progressive exercise protocol preceded 6 min earlier by 6 min of HVY (HVY-RAMP). Exercise involved repeated isotonic plantar flexion (0.5 Hz, ~40° range of motion) against an incremental resistance (~1.0 W/min) and was performed inside a 3.0 Tesla magnet for data collection using 31P-MRS (15 s acquisition time per spectrum). All subjects had previously completed at least one familiarization RAMP trial, from which the steady-state workload in HVY was calculated as the power output halfway between the onset of intracellular acidification and volitional fatigue. We found that, compared to the RAMP protocol, HVY-RAMP resulted in an increased (p <

Following HVY exercise, the onset of the rapid fall in pHi is delayed but, once initiated, occurs at the same rate as in the RAMP protocol. This delay may be the result of an increased contribution of oxidative metabolism, suggested by the delayed onset of rapid increases in [Pi]/[PCr].


Ion Chamber Dosimetry: Accuracy in Small X-Ray Field Measurements [MEDICINE]

Modern radiation therapy techniques, such as stereotactic radiotherapy and tomotherapy, deliver the total treatment dose to the patient using many small fields. This helps to construct a dose distribution that delivers a high dose to the tumour while minimizing the dose to normal tissue. When considering a single small field, the dose resulting from that field is usually reported as a relative dose factor (RDF). The RDF is the ratio of the dose in a small field to the dose in a large reference field. Task group report #51 (TG-51) from the American Association of Physicists in Medicine (AAPM) provides the protocol according to which absolute dose measurements using an ion chamber must be taken, all with respect to measurements at a reference point in a large reference field. In the case of RDF measurements, several assumptions are made which allow for the simplification of the TG-51 protocol, but attention is not necessarily paid to the fact that the conditions in a large reference field differ considerably from those in a small field. In this study we have used experimental measurements with plane-parallel ion chambers and theoretical simulations (Monte Carlo BEAMnrc code) to investigate the validity of the assumptions made in the determination of RDFs for small fields. Four custom-made plane-parallel ion chambers, as well as other common high-resolution radiation dosimeters, were used to measure the dose response in circular 6 MV stereotactic radiotherapy fields. BEAMnrc was then used to simulate the experimental conditions and determine the validity of the assumptions mentioned previously. The assumptions were determined to be invalid for use in small radiation fields. Corrections were calculated from the Monte Carlo results and applied to the experimental measurements. Care needs to be taken when applying existing protocols to non-reference conditions.


Longitudinal 31P Magnetic Resonance Spectroscopy Study of Schizophrenia [MEDICINE]


Schizophrenia affects one in a hundred Canadians at some point in their life. This disorder is characterized by auditory hallucinations, delusions, lack of motivation and thought disorder. Antipsychotic drugs improve some of these symptoms, but there is no cure and the cause of schizophrenia is still unknown. Current models suggest that a brain lesion in utero, in conjunction with neurodegeneration, may result in the symptoms of the disease. These processes may result in abnormal brain function, manifested as changes in brain metabolism.


In our past studies of first-episode and chronic schizophrenia, we used 31P MRS, a noninvasive technique, to measure changes in brain metabolism. These studies revealed significant differences between patients and healthy controls in several regions, including the thalamus and the anterior cingulate. In order to determine where significant changes in metabolites occur as the disease progresses, more measurements between the initial and chronic stages of the disease are required.


In this longitudinal study of schizophrenia, the same subjects undergo two separate scans, the first at disease onset and the second 30 months later. To date, 24 patients with schizophrenia and 15 matched controls have undergone the initial scan at disease onset, and 6 patients and 6 controls have undergone the 30-month scan. Once complete, this study may reveal locations where significant membrane abnormalities occur early in the progression of the disease. This information will help to confirm whether degeneration continues after schizophrenia is diagnosed clinically and may also help to target drug treatments to prevent this degeneration.

Babbitt metal [MECH]

Babbitt metal, also called white metal, is an alloy used to provide the bearing surface in a plain bearing. It was invented in 1839 by Isaac Babbitt in Taunton, Massachusetts, USA. The term is used today to describe a series of alloys used as a bearing metal. Babbitt metal is characterized by its resistance to galling.

Common compositions for Babbitt alloys:

  • 90% tin, 10% copper
  • 89% tin, 7% antimony, 4% copper
  • 80% lead, 15% antimony, 5% tin

Originally used as a cast-in-place bulk bearing material, it is now more commonly used as a thin surface layer in a complex, multi-metal structure.

Babbitt metal is soft and easily damaged, and seems at first sight an unlikely candidate for a bearing surface, but this appearance is deceptive. The structure of the alloy consists of small hard crystals dispersed in a matrix of softer alloy. As the bearing wears, the harder crystals are exposed, with the matrix eroding somewhat to provide a path for the lubricant between the high spots that provide the actual bearing surface.


BlueTec [mech]

BlueTec is DaimlerChrysler's name for its two nitrogen oxide (NOx) reducing systems for use in its diesel automobile engines. One is a urea-based catalyst system called AdBlue; the other, called DeNOx, uses an oxidising catalytic converter and particulate filter combined with other NOx-reducing systems. Both systems were designed to slash emissions further than ever before. Mercedes-Benz introduced the systems in the E-Class (using the 'DeNOx' system) and GL-Class (using 'AdBlue') at the 2006 North American International Auto Show as the E 320 and GL 320 Bluetec. This system makes these vehicles 45-state and 50-state legal respectively in the United States, and is expected to meet all emissions regulations through 2009. It also makes DaimlerChrysler the only car manufacturer in the US committed to selling diesel models in the 2007 model year.


CVCC [mech]

CVCC is a trademark of the Honda Motor Company for a device used to reduce automotive emissions, called Compound Vortex Controlled Combustion. This technology allowed Honda's cars to meet the US emission requirements of the 1970s without a catalytic converter, and first appeared on the 1975 ED1 engine. It is a form of stratified charge engine.

Honda CVCC engines have normal inlet and exhaust valves, plus a small auxiliary inlet valve which provides a relatively rich air/fuel mixture to a volume near the spark plug. The remaining air/fuel charge, drawn into the cylinder through the main inlet valve, is leaner than normal. The volume near the spark plug is contained by a small perforated metal plate. Upon ignition, flame fronts emerge from the perforations and ignite the remainder of the air/fuel charge. The remaining engine cycle is as per a standard four-stroke engine.

This combination of a rich mixture near the spark plug, and a lean mixture in the cylinder allowed stable running, yet complete combustion of fuel, thus reducing CO (carbon monoxide) and hydrocarbon emissions.



stratified charge engine [mech]

The stratified charge engine is a type of internal-combustion engine, similar in some ways to the Diesel cycle, but running on normal gasoline. The name refers to the layering of fuel/air mixture, the charge inside the cylinder.

In a traditional Otto cycle engine the fuel and air are mixed outside the cylinder and are drawn into it during the intake stroke. The air/fuel ratio is kept very close to stoichiometric, which is defined as the exact amount of air necessary for a complete combustion of the fuel. This mixture is easily ignited and burns smoothly.

The problem with this design is that after the combustion process is complete, the resulting exhaust stream contains a considerable amount of free single atoms of oxygen and nitrogen, the result of the heat of combustion splitting the O2 and N2 molecules in the air. These will readily react with each other to create NOx, a pollutant. A catalytic converter in the exhaust system re-combines the NOx back into O2 and N2 in modern vehicles.

A Diesel engine, on the other hand, injects the fuel into the cylinder directly. This has the advantage of avoiding premature spontaneous combustion—a problem known as detonation or ping that plagues Otto cycle engines—and allows the Diesel to run at much higher compression ratios. This leads to a more fuel-efficient engine. That is why they are commonly found in applications where they are being run for long periods of time, such as in trucks.


E85 [mech]

E85 is an alcohol fuel mixture of 85% ethanol and 15% gasoline, by volume. Ethanol derived from crops (bioethanol) is a biofuel.

E85 as a fuel is widely used in Sweden and is becoming increasingly common in the United States, mainly in the Midwest where corn is a major crop and is the primary source material for ethanol fuel production.

E85 is usually used in engines modified to accept higher concentrations of ethanol. Such flexible-fuel engines are designed to run on any mixture of gasoline and ethanol with up to 85% ethanol by volume. The primary differences from non-FFVs are the elimination of bare magnesium, aluminum, and rubber parts in the fuel system; the use of fuel pumps capable of operating with electrically conductive (ethanol) instead of non-conducting dielectric (gasoline) fuel; specially coated wear-resistant engine parts; fuel injection control systems having a wider range of pulse widths (for injecting approximately 30% more fuel); the selection of stainless steel fuel lines (sometimes lined with plastic); the selection of stainless steel fuel tanks in place of terne fuel tanks; and, in some cases, the use of acid-neutralizing motor oil. For vehicles with fuel-tank-mounted fuel pumps, additional measures to prevent arcing, as well as flame arrestors positioned in the tank's fill pipe, are also sometimes used.


FADEC - Full Authority Digital Engine Control [mech]

FADEC is the acronym for Full Authority Digital Engine Control. It is a system consisting of a digital computer (called an EEC, Electronic Engine Control, or ECU, Electronic Control Unit) and its related accessories which control all aspects of aircraft engine performance. FADECs have been produced for both piston engines and jet engines, the primary difference being the different ways in which the engines are controlled.

Electronics' superior accuracy led to early-generation analogue electronic control, first used in Concorde's Rolls-Royce Olympus 593 in the 1960s. Later, in the 1970s, NASA and Pratt & Whitney experimented with the first FADEC, first flown on an F-111 fitted with a highly modified Pratt & Whitney TF-30 left engine. The experiments led to the Pratt & Whitney F100 and Pratt & Whitney PW2000 being the first military and civil engines respectively fitted with FADEC, and later the Pratt & Whitney PW4000 as the first commercial 'dual FADEC' engine.

The aircraft's thrust lever sends electrical signals (the pilot's command, or possibly the autothrottle's) to the FADEC. The FADEC digitally calculates and precisely controls the fuel flow rate to the engines, giving precise thrust. In addition to the fuel metering function, the FADEC performs numerous other control and monitoring functions, such as control of the variable stator vanes (VSVs) and variable bleed valves (VBVs), cabin bleeds and power off-takes, starting and re-starting, turbine blade and vane cooling and blade tip clearance, and thrust reversers, as well as engine health monitoring, oil debris monitoring and vibration monitoring. The inputs come from various aircraft and engine sensors. Apart from the key parameters that are monitored for safe thrust control (shaft rotational speeds, pressures and temperatures at various points along the gas path), the FADEC also monitors hundreds of analog, digital and discrete data items coming from the engine subsystems and related aircraft systems, providing fully redundant and fault-tolerant engine control.
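To make the fuel-metering idea above concrete, here is a minimal, purely illustrative sketch of one control step: the thrust-lever position is mapped to a target fan speed and a simple proportional-integral law nudges fuel flow toward it. The mapping, gains and units are invented for the example and are not the control law of any real FADEC.

    # Toy closed-loop fuel-metering step; all constants are illustrative assumptions.
    def lever_to_target_n1(lever_pct):
        """Map thrust-lever position (0-100%) to a target fan speed (% N1)."""
        return 20.0 + 0.8 * lever_pct          # idle ~20% N1, full lever ~100% N1

    class FuelMeter:
        def __init__(self, kp=0.05, ki=0.01):
            self.kp, self.ki = kp, ki
            self.integral = 0.0

        def update(self, lever_pct, measured_n1, dt):
            """Return a fuel-flow command (arbitrary units) for one control step."""
            error = lever_to_target_n1(lever_pct) - measured_n1
            self.integral += error * dt
            return max(0.0, self.kp * error + self.ki * self.integral)

    meter = FuelMeter()
    print(meter.update(lever_pct=60, measured_n1=55.0, dt=0.02))

In a real engine such a loop runs many times per second and, as the text above notes, is backed by redundant channels and far more sensor inputs.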


Hybrid Synergy Drive (HSD)

Hybrid Synergy Drive (HSD) is a set of hybrid car technologies developed by Toyota and used in that company's Prius, Highlander Hybrid, Camry Hybrid, Lexus RX 400h, and Lexus GS 450h automobiles. It combines the characteristics of an electric drive and a continuously variable transmission, using electricity and transistors in place of toothed gears. The Synergy Drive is a drive-by-wire system with no direct mechanical connection between the engine and the engine controls: both the gas pedal and the gearshift lever in an HSD car merely send electrical signals to a control computer.

HSD is a refinement of the original Toyota Hybrid System (THS) used in the 1997–2003 Toyota Prius. As such it is occasionally referred to as THS II. The name was changed in anticipation of its use in vehicles outside the Toyota brand (Lexus).

When required to classify the transmission type of an HSD vehicle (such as in standard specification lists or for regulatory purposes), Toyota describes HSD-equipped vehicles as having E-CVT (Electronically-controlled Continuously Variable Transmission).


regenerative brake

A regenerative brake is a device or system which allows a vehicle to recapture part of the kinetic energy that would otherwise be lost as heat when braking, and to make use of that energy either by storing it for future use or by feeding it back into a power system for other vehicles to use.

It is similar to an electromagnetic brake, which generates heat instead of electricity and is unable to completely stop a rotor.

Regenerative brakes are a form of dynamo generator, originally discovered in 1832 by Hippolyte Pixii. The dynamo's rotor slows as the kinetic energy is converted to electrical energy through electromagnetic induction. The dynamo can be used as either generator or brake by converting motion into electricity or be reversed to convert electricity into motion.

Using a dynamo as a regenerative brake was discovered coincident with the modern electric motor. In 1873, Zénobe Gramme attached the wires from two dynamos together. When one dynamo rotor was turned as a regenerative brake, the other became an electric motor.

It is estimated that regenerative braking systems in vehicles currently reach 31.3% electric generation efficiency, with most of the remaining energy being released as heat; the actual efficiency depends on numerous factors, such as the state of charge of the battery, how many wheels are equipped to use the regenerative braking system, and whether the topology used is parallel or serial in nature. The system is no more efficient than conventional friction brakes, but it reduces the use of contact elements like brake pads, which eventually wear out. Traditional friction-based brakes must also be provided for use when rapid, powerful braking is required.
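As a rough worked example of the scale involved (the vehicle mass and speed below are assumed, illustrative values; only the 31.3% figure comes from the text above):

    E_k = \tfrac{1}{2} m v^2 = \tfrac{1}{2}\,(1500\,\mathrm{kg})\,(20\,\mathrm{m/s})^2 = 300\,\mathrm{kJ},
    \qquad
    E_\mathrm{recovered} \approx 0.313 \times 300\,\mathrm{kJ} \approx 94\,\mathrm{kJ}

So a single stop from about 72 km/h could, in principle, return on the order of 0.03 kWh to the battery.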


Enhanced Data rates for Global Evolution

The first stepping stone in the migration path to third-generation wireless mobile services (3G) is the General Packet Radio Service, GPRS, a packet-switched technology that delivers speeds of up to 115 kbit/s. If GPRS is already in place, Enhanced Data rates for Global Evolution (EDGE) technology is most effective as the second stepping stone, giving a low-impact migration. Only software upgrades and EDGE plug-in transceiver units are needed. The approach protects operators' investments by allowing them to reuse their existing network equipment and radio systems.

The EDGE technology will enable GSM and TDMA operators to deliver third-generation mobile multimedia services using existing network frequencies, bandwidth, carrier structure and cell planning processes. By using a more efficient air-modulation technology optimized for data communications, EDGE increases end-user data rates to up to 384 kbit/s, and potentially higher in good-quality radio environments.

Vertical Cavity Surface Emitting Lasers

Vertical Cavity Surface Emitting Lasers (VCSELs) are lasers that emit light from their surface, in contrast with regular 'edge emitters'. They also have a vertical cavity, as the name suggests, which enables surface emission. 'Vixels', as they are commonly called, have several superior characteristics compared to their edge-emitting counterparts.


VCSEL – semiconductor micro laser diodes which emit light perpendicular to their PN junction, in a cylindrical beam vertically from the surface of a fabricated wafer, and feature a circular, low-divergence beam.


  • Earliest reported in 1965 by Melngailis
  • First demonstrated in 1979 at the Tokyo Institute of Technology
  • Epitaxial mirrors for GaAs/AlGaAs VCSELs pioneered in 1983


Distributed Bragg Reflection

If layers of alternating semiconductors are stacked periodically, each layer having a thickness of λo/4n (a quarter of the optical wavelength in that layer), the reflections from each of the boundaries add in phase to produce a large reflectivity.

Bragg reflection condition

The periodicity of the cladding layers is chosen so that

n1d1 + n2d2 = λo/2

n1, n2: refractive indices of the two layers

d1, d2: thicknesses of the layers

λo: free-space wavelength of the optical beam
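As a quick numerical illustration of this quarter-wave condition, the sketch below computes layer thicknesses for a GaAs/AlAs mirror pair. The refractive indices are rough, assumed values near 850 nm, chosen only for the example.

    # Illustrative sketch: quarter-wave layer thicknesses for a GaAs/AlAs DBR.
    lambda_0 = 850e-9          # free-space design wavelength (m)
    n1, n2 = 3.6, 2.95         # assumed indices of GaAs and AlAs near 850 nm

    d1 = lambda_0 / (4 * n1)   # quarter-wave thickness of the GaAs layer
    d2 = lambda_0 / (4 * n2)   # quarter-wave thickness of the AlAs layer

    # Check the Bragg condition n1*d1 + n2*d2 = lambda_0 / 2
    print(d1 * 1e9, d2 * 1e9, (n1 * d1 + n2 * d2) / lambda_0)   # ~59 nm, ~72 nm, 0.5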

Epitaxial growth of VCSEL

There are two methods

1. Molecular beam epitaxy (MBE)

2. Metal-organic vapour phase epitaxy (MOVPE)

Inverse Multiplexing over ATM

Local Area Networks are now being used to transport voice and video traffic together with the traditional data traffic they have always supported. In the case of voice and video applications, not only is there a need for more bandwidth, but there is also a need for guaranteed levels of service, because these applications are very sensitive to latency and delay.

Inverse multiplexing has proved to be a technology that overcomes the bandwidth gap between LAN and WAN. Inverse multiplexing is exactly the opposite of traditional multiplexing. In traditional multiplexing, multiple streams of data are combined into one single, larger data pipe, whereas inverse multiplexing combines multiple circuits into a single logical data pipe.

Asynchronous Transfer Mode (ATM) has a compelling business case as a WAN technology and is on a steep growth curve, both in public carrier networks and in private organisations with requirements for networking video, voice and data traffic.

Inverse multiplexing over ATM (IMA) is a breakthrough standard that enables right-sizing and right-pricing of enterprise access solutions for organisations with low to mid-range WAN traffic requirements, and offers the benefits of ATM's quality of service and statistical bandwidth optimisation capabilities. IMA divides an aggregate stream of ATM cells across multiple WAN links on a cell-by-cell basis, hence the name inverse multiplexing. In combination with ATM, IMA simplifies and reduces the WAN cost of ownership.
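A minimal sketch of the cell-by-cell idea, assuming a simple round-robin scheduler and made-up link names (real IMA also adds control cells, framing and differential-delay compensation):

    # Illustrative only: spread a single stream of ATM cells across several
    # WAN links, one cell at a time, in round-robin order.
    def inverse_multiplex(cells, links):
        """Assign each cell to a link in strict round-robin order."""
        schedule = {link: [] for link in links}
        for i, cell in enumerate(cells):
            schedule[links[i % len(links)]].append(cell)
        return schedule

    cells = [f"cell-{i}" for i in range(8)]
    print(inverse_multiplex(cells, ["T1-A", "T1-B", "T1-C"]))
    # {'T1-A': ['cell-0', 'cell-3', 'cell-6'], 'T1-B': [...], 'T1-C': [...]}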

XMax


xMax, developed by xG Technology, is a wireless communications technology whose developers claim it uses low power and provides a high data rate over a distance of about 13 miles.

“A fundamental paradigm shift in the way radio signals are modulated and demodulated.”

Developed by xG Technology in Florida, xMax sends each bit of data in a single RF cycle, rather than transmitting many RF cycles for each bit.

Power is saved not only in the transmission but also at the receiver: because receivers will only recognize single-cycle waveforms, power isn't wasted on unintended RF signals.


Automated eye-pattern recognition systems

Privacy of personal data is an illusion in today's complex society. With only passwords or Social Security Numbers as identity or security measures, everyone is vulnerable to invasion of privacy or a breach of security. Traditional means of identification are easily compromised, and anyone can use this information to assume another's identity. Sensitive personal and corporate information can be accessed and even criminal activities can be performed using another name. An eye-pattern recognition system provides a barrier to, and virtually eliminates, fraudulent authentication and identity theft, and controls privileged access or authorised entry to sensitive sites, data or material.

In addition to privacy protection, there are a myriad of applications where iris recognition technology can provide protection and security. This technology offers the potential to unlock major business opportunities by providing high-confidence customer validation. Unlike other measurable human features in the face, hand, voice or fingerprint, the patterns in the iris do not change over time, and research shows the matching accuracy of iris recognition systems is greater than that of DNA testing. Positive identifications can be made through glasses, contact lenses and most sunglasses.

Automated recognition of people by the pattern of their eyes offers major advantages over conventional identification techniques. Iris recognition systems also require very little co-operation from the subject, operate at a comfortable distance and are virtually impossible to deceive. Iris recognition combines research in computer vision, pattern recognition and the man-machine interface. The purpose is real-time, high-confidence recognition of a person's identity by mathematical analysis of the random patterns that are visible within the iris. Since the iris is a protected internal organ whose random texture is stable throughout life, it can serve as a 'living password' that one need not remember but always carries.

Zigbee Networks

ZigBee is a published specification for a set of high-level communication protocols designed to use small, low-power digital radios based on the IEEE 802.15.4 standard for wireless personal area networks (WPANs). The relationship between IEEE 802.15.4-2003 and ZigBee is analogous to that between IEEE 802.11 and the Wi-Fi Alliance. The ZigBee 1.0 specifications were ratified on December 14, 2004 and are available to members of the ZigBee Alliance. An entry-level membership in the ZigBee Alliance costs US$3,500 and provides access to the specifications. For non-commercial purposes, the ZigBee specification is available to the general public at the ZigBee Alliance homepage.
The technology is designed to be simpler and cheaper than other WPANs such as Bluetooth. The most capable ZigBee node type is said to require only about 10% of the software of a typical Bluetooth or wireless Internet node, while the simplest nodes need about 2%. However, actual code sizes are much higher, more like 50% of the Bluetooth code size. ZigBee chip vendors have announced 128-kilobyte devices.
As of 2005, the estimated cost of the radio for a ZigBee node is about $1.10 to the manufacturer in very high volumes. Most ZigBee solutions require an additional microcontroller, driving the price further up at this time. In comparison, before Bluetooth was launched (1998) it had a projected price, in high volumes, of $4-$6. The price of consumer-grade Bluetooth chips is now under $3.
ZigBee has started work on version 1.1. Version 1.1 is meant to take advantage of improvements in the 802.15.4b specification (still in draft), most notably the addition of CCM* as an alternative to CCM (CTR + CBC-MAC) mode. CCM* enjoys the same security proof as CCM and provides greater flexibility in the choice of authentication and encryption.


Blue Eyes

Is it possible to create a computer which can interact with us as we interact with each other? For example, imagine that one fine morning you walk into your computer room and switch on your computer, and it tells you, "Hey friend, good morning, you seem to be in a bad mood today." It then opens your mailbox, shows you some of your mail and tries to cheer you up. It may sound like fiction, but it will be the life led by "BLUE EYES" in the very near future.

The basic idea behind this technology is to give the computer human perceptual power. We all have some perceptual abilities; that is, we can understand each other's feelings. For example, we can understand one's emotional state by analyzing his facial expression. Adding these perceptual abilities of humans to computers would enable computers to work together with human beings as intimate partners. The "BLUE EYES" technology aims at creating computational machines that have perceptual and sensory abilities like those of human beings.

Mesotechnology

Mesotechnology describes a budding research field which could replace nanotechnology in the future as the primary means to control matter at length scales ranging from a cluster of atoms to microscopic elements. The prefix meso- comes from the Greek word mesos, meaning middle, hence the technology spans a range of length scales as opposed to nanotechnology which is concerned only with the smallest atomic scales.

Although the term itself is still quite new, the general concept is not. Many fields of science have traditionally focused either on single discrete elements or on large statistical collections where many theories have been successfully applied. In the field of physics, for example, Quantum Mechanics describes very well phenomena on the atomic to nanoscale, while classical Newtonian Mechanics describes the behavior of objects on the microscale and up. However, the length scale in the middle (the mesoscale) is not well described by either theory. Similarly, psychologists focus heavily on the behavior and mental processes of the individual while sociologists study the behavior of large societal groups, but what happens when only 3 people are interacting? This is the mesoscale.

Brain-Computer Interface



Brain-Computer Interfacing is a fascinating, dynamic and highly interdisciplinary research field at the interface between medicine, psychology, neurology, rehabilitation engineering, man-machine interaction, machine learning and signal processing.
BRAIN-COMPUTER INTERFACE SYSTEM:
In this system, signals from the brain are acquired by electrodes on the scalp, the cortical surface, or from within the brain and are processed to extract specific signal features that reflect the user's intent. Features are translated into commands that operate a device (e.g., a simple word processing program, a wheelchair, or a neuroprosthesis).
These BCI systems measure specific features of brain activity and translate them into device control signals.
A BCI system can be used to:
  • Answer simple questions rapidly
  • Control the environment
  • Perform time-consuming word processing.
At the same time, the performance of this new technology, measured in speed and accuracy, or in the combined measure of information transfer rate (i.e., bit rate), is modest. Current systems can reach no more than about 25 bits/min, even under optimal conditions. The eventual value of this new technology will depend largely on the degree to which its information transfer rate can be increased.
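A minimal, purely illustrative sketch of that pipeline in software: a fake signal is acquired, a band-power feature is extracted, and the feature is translated into a command. The sampling rate, frequency band, threshold and command names are all assumptions for the example, not part of any particular BCI.

    import numpy as np

    fs = 250                                   # assumed sampling rate (Hz)
    t = np.arange(0, 2, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # fake EEG

    def band_power(signal, fs, lo, hi):
        """Power of the signal in the [lo, hi] Hz band via the FFT."""
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(signal.size, 1 / fs)
        return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

    mu_power = band_power(eeg, fs, 8, 12)      # mu-band feature (8-12 Hz)
    command = "MOVE_CURSOR" if mu_power > 1000 else "IDLE"   # assumed threshold
    print(mu_power, command)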

Z-Wave

Z-Wave is an interoperable wireless communication standard developed by the Danish company Zensys and the Z-Wave Alliance. It is designed for low-power and low-bandwidth appliances, such as home automation and sensor networks.
Radio specifications
Data rate: 9,600 bit/s or 40,000 bit/s, fully interoperable
Radio specifics
In Europe, the 868 MHz band has a 1% duty cycle limitation, meaning that a Z-wave unit can only transmit 1% of the time. This limitation is not present in the US 908 MHz band, but US legislation imposes a 1 mW transmission power limit (as opposed to 25 mW in Europe). Z-wave units can be in power-save mode and only be active 0.1% of the time, thus reducing power consumption dramatically.

Topology and routing
Z-wave uses an intelligent mesh network topology and has no master node. A message from node A to node C can be successfully delivered even if the two nodes are not within range, provided that a third node B can communicate with both nodes A and C. If the preferred route is unavailable, the message originator will attempt other routes until a path is found to the 'C' node. Therefore, a Z-wave network can span much further than the radio range of a single unit. In order for Z-wave units to be able to route unsolicited messages, they cannot be in sleep mode; therefore, most battery-operated devices will opt not to be repeater units. A Z-wave network can consist of up to 232 units, with the option of bridging networks if more units are required.
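A minimal sketch of the routing idea, assuming a made-up neighbour table and plain breadth-first search (actual Z-Wave routing uses source routes over a stored topology, so this is only an analogy):

    from collections import deque

    def find_route(neighbours, src, dst):
        """Breadth-first search over the neighbour table; returns a node path."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in neighbours.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}   # A cannot hear C directly
    print(find_route(neighbours, "A", "C"))                  # ['A', 'B', 'C']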

Application areas
Due to the low bandwidth, Z-wave is not suitable for audio/video applications, but it is well suited to sensors and control units, which typically transmit only a few bytes at a time.

Graphics tablet

A graphics tablet is a computer peripheral device that allows one to hand-draw images directly into a computer, generally through an imaging program. Graphics tablets consist of a flat surface upon which the user may 'draw' an image using an attached stylus, a pen-like drawing apparatus. The image generally does not appear on the tablet itself but, rather, is displayed on the computer monitor.

It is interesting to note that the stylus, as a technology, was originally designed as a part of the electronics, but later it simply took on the role of providing a smooth, but accurate 'point' that would not damage the tablet surface while 'drawing'.

Tablet PC

A tablet PC is a notebook- or slate-shaped mobile computer. Its touchscreen or digitizing tablet technology allows the user to operate the computer with a stylus or digital pen instead of a keyboard or mouse.

The form factor presents an alternate method of interacting with a computer, the main intent being to increase mobility and productivity. Tablet PCs are often used in places where normal notebooks are impractical or unwieldy, or do not provide the needed functionality.

The tablet PC is a culmination of advances in miniaturization of notebook hardware and improvements in integrated digitizers as methods of input. A digitizer is typically integrated with the screen, and correlates physical touch or digital pen interaction on the screen with the virtual information portrayed on it. A tablet's digitizer is an absolute pointing device rather than a relative pointing device like a mouse or touchpad. A target can be virtually interacted with directly at the point it appears on the screen.

Light Pen

A light pen is a computer input device in the form of a light-sensitive wand used in conjunction with the computer's CRT monitor. It allows the user to point to displayed objects, or draw on the screen, in a similar way to a touch screen but with greater positional accuracy. A light pen can work with any CRT-based monitor, but not with LCD screens, projectors or other display devices.

A light pen is fairly simple to implement. The light pen works by sensing the sudden small change in brightness of a point on the screen when the electron gun refreshes that spot. By noting exactly where the scanning has reached at that moment, the X,Y position of the pen can be resolved. This is usually achieved by the light pen causing an interrupt, at which point the scan position can be read from a special register, or computed from a counter or timer. The pen position is updated on every refresh of the screen.
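A minimal sketch of the position calculation described above, assuming made-up raster timing constants (real CRT controllers differ):

    H_TOTAL = 800          # pixel clocks per scanline, including blanking (assumed)
    H_VISIBLE = 640        # visible pixels per line (assumed)
    V_VISIBLE = 480        # visible scanlines (assumed)

    def pen_position(clock_count):
        """Convert the pixel-clock count latched at the pen interrupt to (x, y)."""
        y, x = divmod(clock_count, H_TOTAL)
        if x >= H_VISIBLE or y >= V_VISIBLE:
            return None                     # pen triggered during blanking
        return x, y

    print(pen_position(123456))             # (256, 154) with these constants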

The light pen became moderately popular during the early 1980s. It was notable for its use in the Fairlight CMI, and the BBC Micro. However, due to the fact that the user was required to hold his or her arm in front of the screen for long periods of time, the light pen fell out of use as a general purpose input device.

Serial Attached SCSI

In computer hardware, Serial Attached SCSI (SAS) is a computer bus technology primarily designed for the transfer of data to and from devices such as hard disks and CD-ROM drives. SAS is a serial communication protocol for direct-attached storage (DAS) devices. It is designed for the corporate and enterprise market as a replacement for parallel SCSI, allowing for much higher-speed data transfers than previously available, and is backwards-compatible with SATA. Though SAS uses serial communication instead of the parallel method found in traditional SCSI devices, it still uses SCSI commands for interacting with SAS end devices. The SAS protocol is developed and maintained by the T10 committee. The current draft revision of the SAS protocol can be downloaded as the SAS-2 draft.

Low voltage differential signaling (LVDS)


Low voltage differential signaling, or LVDS, is an electrical signaling system that can run at very high speeds over cheap, twisted-pair copper cables. It was introduced in 1994, and has since become very popular in computers, where it forms part of very high-speed networks and computer buses.
LVDS uses the difference in voltage between two wires to signal information. The transmitter injects a small current, nominally 3.5 milliamperes, into one wire or the other, depending on the logic level to be sent. The current passes through a resistor of about 100 to 120 ohms (matched to the characteristic impedance of the cable) at the receiving end, then returns in the opposite direction along the other wire. From Ohm's law, the voltage difference across the resistor is therefore about 350 millivolts. The receiver senses the polarity of this voltage to determine the logic level. (This is a type of current loop signaling).
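As a quick check of the figures above (the nominal 3.5 mA drive current into a 100 Ω termination):

    V = I R = (3.5\,\mathrm{mA}) \times (100\,\Omega) = 350\,\mathrm{mV}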


MAC address

In computer networking a Media Access Control address (MAC address) is a unique identifier attached to most forms of networking equipment. Most layer 2 network protocols use one of three numbering spaces managed by the IEEE: MAC-48, EUI-48, and EUI-64, which are designed to be globally unique. Not all communications protocols use MAC addresses, and not all protocols require globally unique identifiers. The IEEE claims trademarks on the names 'EUI-48' and 'EUI-64'. (The 'EUI' stands for Extended Unique Identifier.)

ARP/RARP is commonly used to map the layer 2 MAC address to an address in a layer 3 protocol such as Internet Protocol (IP). On broadcast networks such as Ethernet the MAC address allows each host to be uniquely identified and allows frames to be marked for specific hosts. It thus forms the basis of most of the layer 2 networking upon which higher OSI Layer protocols are built to produce complex, functioning networks.
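For illustration, a MAC-48 address splits into an IEEE-assigned Organizationally Unique Identifier (the first three octets) and a vendor-assigned remainder; the address in this sketch is made up:

    # Illustrative only: split a made-up MAC-48 address into OUI and device parts.
    mac = "00:1A:2B:3C:4D:5E"
    octets = mac.split(":")
    oui, device = octets[:3], octets[3:]
    print("OUI:", "-".join(oui), "| device:", "-".join(device))
    # OUI: 00-1A-2B | device: 3C-4D-5E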

Native Command Queuing (NCQ)

Native Command Queuing (NCQ) is a technology designed to increase the performance of SATA hard disks by allowing the individual hard disk to receive more than one I/O request at a time and decide which to complete first. Using detailed knowledge of its own seek times and rotational position, the drive can compute the best order in which to perform the operations. This can reduce the amount of unnecessary seeking (going back and forth) of the drive's heads, resulting in increased performance (and slightly decreased wear of the drive) for workloads where multiple simultaneous read/write requests are outstanding, most often occurring in server-type applications.
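A toy sketch of the reordering idea, assuming a greedy nearest-first policy on logical block addresses (real drives also account for rotational position, so this is only an analogy):

    def reorder(requests, head=0):
        """requests: list of logical block addresses; returns a service order."""
        pending, order = list(requests), []
        while pending:
            nxt = min(pending, key=lambda lba: abs(lba - head))
            pending.remove(nxt)
            order.append(nxt)
            head = nxt
        return order

    print(reorder([5000, 100, 4900, 150]))   # [100, 150, 4900, 5000]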

Tagged Command Queuing (TCQ)

TCQ stands for the Tagged Command Queuing technology built into certain PATA and SCSI hard drives. It allows the operating system to send multiple read and write requests to a hard drive. TCQ is almost identical in function to Native Command Queuing (NCQ) used by SATA drives.

Before TCQ, an operating system was only able to send one request at a time. In order to boost performance, it had to decide the order of the requests based on its own, possibly incorrect, idea of what the hard drive was doing. With TCQ, the drive can make its own decisions about how to order the requests (and in turn relieve the operating system from having to do so). The result is that TCQ can improve the overall performance of a hard drive.

Differential signaling

Differential signaling is a method of transmitting information over pairs of wires (as opposed to single-ended signalling, which transmits information over single wires).

Differential signaling reduces the noise on a connection by rejecting common-mode interference. Two wires (referred to here as A and B) are routed in parallel, and sometimes twisted together, so that they will receive the same interference. One wire carries the signal, and the other wire carries the inverse of the signal, so that the sum of the voltages on the two wires is always constant.

At the end of the connection, instead of reading a single signal, the receiving device reads the difference between the two signals. Since the receiver ignores the wires' voltages with respect to ground, small changes in ground potential between transmitter and receiver do not affect the receiver's ability to detect the signal. Also, the system is immune to most types of electrical interference, since any disturbance that lowers the voltage level on A will also lower it on B.

HyperTransport (HT)

HyperTransport (HT), formerly known as Lightning Data Transport (LDT), is a bidirectional serial/parallel high-bandwidth, low-latency computer bus that was introduced on April 2, 2001[1]. The HyperTransport Technology Consortium is in charge of promoting and developing HyperTransport technology. The technology is used by AMD and Transmeta in x86 processors, PMC-Sierra, Broadcom, and Raza Microelectronics in MIPS microprocessors, ATI Technologies, NVIDIA, VIA, SiS, ULi/ALi, AMD, Apple Computer and HP in PC chipsets, HP, Sun Microsystems, IBM, and IWill in servers, Cray, Newisys, and PathScale in high performance computing, and Cisco Systems in routers. Notably missing from this list is semiconductor giant Intel, which continues to use a shared bus architecture.

FireWire

FireWire (also known as i.Link or IEEE 1394) is a personal computer (and digital audio/digital video) serial bus interface standard, offering high-speed communications and isochronous real-time data services. FireWire has replaced Parallel SCSI in many applications due to lower implementation costs and a simplified, more adaptable cabling system.

Almost all modern digital camcorders have included this connection since 1995. Many computers intended for home or professional audio/video use have built-in FireWire ports including all Macintosh, Dell and Sony computers currently produced. FireWire was also an attractive feature on the Apple iPod for several years, permitting new tracks to be uploaded in a few seconds and also for the battery to be recharged concurrently with one cable. However, Apple has eliminated FireWire support in favor of Universal Serial Bus (USB) 2.0 on its newer iPods due to space constraints and for wider compatibility.

Telestrator

The telestrator is a device that allows its operator to draw a freehand sketch over a motion picture image.

The telestrator was invented by physicist Leonard Reiffel, who used it to draw illustrations on a series of science shows he did for public television in the late 1960s. The user interface for early telestrators required the user to draw on a TV screen with a light pen, whereas modern implementations are commonly controlled with a touch screen or tablet PC.

Today telestrators are widely used in broadcasts of all major sports. They have also become a useful tool in televised weather reports.

Digital Micromirror Device

A Digital Micromirror Device, or DMD, is an optical semiconductor that is the core of DLP projection technology, and was invented by Dr. Larry Hornbeck and Dr. William E. 'Ed' Nelson of Texas Instruments (TI) in 1987. The DMD project began as the Deformable Mirror Device in 1977, using micromechanical, analog light modulators. The first analog DMD product was the TI DMD2000 airline ticket printer, which used a DMD instead of a laser scanner.

A DMD chip has on its surface several hundred thousand microscopic mirrors arranged in a rectangular array which correspond to the pixels in the image to be displayed. The mirrors can be individually rotated ±10-12°, to an on or off state. In the on state, light from the bulb is reflected onto the lens, making the pixel appear bright on the screen. In the off state, the light is directed elsewhere (usually onto a heatsink), making the pixel appear dark. To produce greyscales, the mirror is toggled on and off very quickly, and the ratio of on time to off time determines the shade produced (binary pulse-width modulation). Contemporary DMD chips can produce up to 1024 shades of gray. See DLP for a discussion of how color images are produced in DMD-based systems.

The mirrors themselves are made out of aluminum and are around 16 micrometres across. Each one is mounted on a yoke which in turn is connected to two support posts by compliant torsion hinges. In this type of hinge, the axle is fixed at both ends and literally twists in the middle. Because of the small scale, hinge fatigue is not a problem, and tests have shown that even 1 trillion operations do not cause noticeable damage. Tests have also shown that the hinges cannot be damaged by normal shock and vibration, since it is absorbed by the DMD superstructure.

Two pairs of electrodes on either side of the hinge control the position of the mirror by electrostatic attraction. One pair acts on the yoke and the other acts on the mirror directly. The majority of the time, equal bias charges are applied to both sides simultaneously. Instead of flipping to a central position as one might expect, this actually holds the mirror in its current position, because the attraction force on the side the mirror is already tilted towards is greater, since that side is closer to the electrodes.

To move the mirror, the required state is first loaded into an SRAM cell located beneath the pixel, which is also connected to the electrodes. The bias voltage is then removed, allowing the charges from the SRAM cell to prevail, moving the mirror. When the bias is restored, the mirror is once again held in position, and the next required movement can be loaded into the memory cell. The bias system is used because it reduces the voltage levels required to address the pixels such that they can be driven directly from the SRAM cell, and also because the bias voltage can be removed at the same time for the whole chip, meaning every mirror moves at the same instant. The advantages of the latter are more accurate timing and a more filmic moving image.
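A small sketch of the binary pulse-width modulation idea: each bit of the grey level keeps the mirror 'on' for a time proportional to that bit's weight. The frame time below is an assumed value; the 10-bit depth follows from the 1024-shade figure quoted above.

    FRAME_TIME_MS = 8.0        # assumed time available for one grey field
    BITS = 10                  # 2**10 = 1024 grey shades, as quoted above

    def on_time_ms(grey_level):
        """Total mirror 'on' time within the frame for a given grey level."""
        unit = FRAME_TIME_MS / (2 ** BITS - 1)
        return sum((1 << b) * unit for b in range(BITS) if grey_level & (1 << b))

    print(on_time_ms(0), on_time_ms(511), on_time_ms(1023))   # 0.0, ~4.0, 8.0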

VIRTUAL SURGERY

Virtual surgery is a computer-based simulated surgery, which can teach surgeons new procedures and can determine their level of competence before they operate on patients. Virtual surgery is based on the concept of virtual reality. A simulated model of the human anatomy which looks, feels and responds like a real human body is created for the surgeon to operate on. The virtual reality simulators consist of force-feedback devices, a real-time haptic computer, haptic software, a dynamic simulator and 3D graphics. Using 3D visualization technologies and the haptic devices, a surgery can be performed which enables the surgeon to reach into the virtual patient with their hands to touch, feel, grasp and manipulate the simulated organs.

Plasma Television

Television has been around since the early 20th century, and for the past 50 years it has held a pretty common place in our living rooms. Since the invention of television, engineers have been striving to produce slim and flat displays that would deliver images as good as, or even better than, the bulky CRT. Scores of research teams all over the world have been working to achieve this, and plasma television has achieved this goal. Plasma and high-definition are just two of the latest technologies to hit stores. The main contenders in the flat-panel race are the PDP (Plasma Display Panel) and the flat CRT, along with LCD and FED (Field Emission Display). To get an idea of what makes a plasma display different, it helps to understand how a conventional TV set works. Conventional TVs use a CRT to create the images we see on the screen. The cathode is a heated filament, like the one in a light bulb, housed inside a vacuum created in a tube of thick glass; that is what makes your TV so big and heavy. The newest entrant in the field of flat-panel display systems is the plasma display. Plasma display panels do not contain cathode ray tubes, and their pixels are activated differently.

Femtotechnology

Femtotechnology is a term used by some futurists to refer to structuring of matter on a femtometre scale, by analogy with nanotechnology and picotechnology. This involves the manipulation of excited energy states within atomic nuclei to produce metastable (or otherwise stabilized) states with unusual properties. In the extreme case, excited states of nucleons are considered, ostensibly to tailor the behavioral properties of these particles (though this is in practice unlikely to work as intended).

Practical applications of femtotechnology are currently considered to be unlikely. The spacings between nuclear energy levels require equipment capable of efficiently generating and processing gamma rays, without equipment degradation. The nature of the strong interaction is such that excited nuclear states tend to be very unstable (unlike the excited electron states in Rydberg atoms), and there are a finite number of excited states below the nuclear binding energy, unlike the (in principle) infinite number of bound states available to an atom's electrons. Similarly, what is known about the excited states of individual nucleons seems to indicate that these do not produce behavior that in any way makes nucleons easier to use or manipulate, and indicates instead that these excited states are even less stable and fewer in number than the excited states of atomic nuclei.

The hypothetical hafnium bomb can be considered a crude application of femtotechnology.

Free space laser communication

Lasers have been considered for space communication since their realization in 1960. It was soon recognized that the laser had the potential to transfer data at extremely high rates.

Features of laser communication

Extremely high bandwidth and large information throughput are available, many times greater than with RF communication. Modulation of a helium-neon laser (frequency 4.7 × 10^14 Hz) results in a channel bandwidth of 4,700 GHz, which is enough to carry a million simultaneous TV channels.
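The quoted figure corresponds to taking roughly 1% of the optical carrier frequency as usable modulation bandwidth (the 1% fraction is a common rule of thumb assumed here, not stated above):

    0.01 \times 4.7 \times 10^{14}\,\mathrm{Hz} = 4.7 \times 10^{12}\,\mathrm{Hz} = 4700\,\mathrm{GHz}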

Small antenna size requires only a small increase in the weight and volume of the satellite. This reduces blockage of the fields of view of the most desirable areas on satellites. Laser satellite communication equipment can provide advantages of 3:1 in mass and 2:1 in power relative to microwave systems.

Narrow beam divergence affords interference-free and secure operation. The existence of laser beams cannot be detected with spectrum analyzers. The antenna gain made possible by the narrow beam enables a small telescope aperture to be used.

The 1550 nm wavelength technology has the added advantage of being inherently eye-safe at the power levels used in free-space systems, alleviating the health and safety concerns often raised with using lasers in an open environment where human exposure is possible.

Laser technology can meet the needs of a variety of space missions, including intersatellite links, Earth to near-space links, and deep space missions. The vast distances to deep space make data return via conventional radio frequency techniques extremely difficult.


Microphotonics

Microphotonics is a branch of technology that deals with directing light on a microscopic scale. It is used in optical networking.
Microphotonics employs at least two different materials with a large differential index of refraction to squeeze the light down to a small size. Generally speaking, virtually all of microphotonics relies on Fresnel reflection to guide the light. If the photons reside mainly in the higher-index material, the confinement is due to total internal reflection. If the confinement is due to many distributed Fresnel reflections, the device is termed a photonic crystal. There are many different types of geometries used in microphotonics, including optical waveguides, optical microcavities and arrayed waveguide gratings.
Light bounces off the small yellow square that MIT physics professor John Joannopoulos is showing off. It looks like a scrap of metal, something a child might pick up as a plaything. But it isn't a toy, and it isn't metal. Made of a few ultrathin layers of non-conducting material, this photonic crystal is the latest in a series of materials that reflect various wavelengths of light almost perfectly. Photonic crystals are on the cutting edge of microphotonics: technologies for directing light on a microscopic scale that will make a major impact on telecommunications.

In the short term, microphotonics could break up the logjam caused by the rocky union of fiber optics and electronic switching in the telecommunications backbone. Photons barreling through the network's optical core run into bottlenecks when they must be converted into the much slower streams of electrons that are handled by electronic switches and routers. To keep up with the Internet's exploding need for bandwidth, technologists want to replace electronic switches with faster, miniature optical devices, a transition that is already under way. Because of the large payoff (a much faster, all-optical Internet), many competitors are vying to create such devices. Large telecom equipment makers, including Lucent Technologies, Agilent Technologies and Nortel Networks, as well as a number of startup companies, are developing new optical switches and devices. Their innovations include tiny micromirrors, silicon waveguides, even microscopic bubbles to better direct light.
But none of these fixes has the technical elegance and widespread utility of photonic crystals. In Joannopoulos' lab, photonic crystals are providing the means to create optical circuits and other small, inexpensive, low-power devices that can carry, route and process data at the speed of light. 'The trend is to make light do as many things as possible,' Joannopoulos says. 'You may not replace electronics completely, but you want to make light do as much as you can.'
Conceived in the late 1980s, photonic crystals are to photons what semiconductors are to electrons, offering an excellent medium for controlling the flow of light. Like the doorman of an exclusive club, the crystals admit or reflect specific photons depending on their wavelength and the design of the crystal. In the 1990s, Joannopoulos suggested that defects in the crystals' regular structure could bribe the doorman, providing an effective and efficient method to trap the light or route it through the crystal.
Since then, Joannopoulos has been a pioneer in the field, writing the definitive book on the subject in 1995: Photonic Crystals: Molding the Flow of Light. 'That's the way John thinks about it,' says MIT materials scientist and collaborator Edwin Thomas. 'Molding the flow of light, by confining light and figuring out ways to make light do his bidding-bend, go straight, split, come back together-in the smallest possible space.'
Joannopoulos' group has produced several firsts. They explained how crystal filters could pick out specific streams of light from the flood of beams in wavelength division multiplexing, or WDM, a technology used to increase the amount of data carried per fiber (TR March/April 1999). The lab's work on two-dimensional photonic crystals set the stage for the world's smallest laser and electromagnetic cavity, key components in building integrated optical circuits.
But even if the dream of an all-optical Internet comes to pass, another problem looms. So far, network designers have found ingenious ways to pack more and more information into fiber optics, both by improving the fibers and by using tricks like WDM. But within five to 10 years, some experts fear it won't be possible to squeeze any more data into existing fiber optics.
The way around this may be a type of photonic crystal recently created by Joannopoulos' group: a 'perfect mirror' that reflects specific wavelengths of light from every angle with extraordinary efficiency. Hollow fibers lined with this reflector could carry up to 1,000 times more data than current fiber optics, offering a solution when glass fibers reach their limits. And because it doesn't absorb and scatter light like glass, the invention may also eliminate the expensive signal amplifiers needed every 60 to 80 kilometers in today's optical networks. Joannopoulos is now exploring the theoretical limits of photonic crystals. How much smaller can devices be made, and how can they be integrated into optical chips for use in telecommunications and, perhaps, ultrafast optical computers? Says Joannopoulos: 'Once you start being able to play with light, a whole new world opens up.'
Reference:
Wikipedia, Technology Review (MIT)

Astrophotography

Astrophotography is a specialised type of photography that entails making photographs of astronomical objects in the night sky such as planets, stars, and deep sky objects such as star clusters and galaxies.

Astrophotography is used to reveal objects that are too faint to observe with the naked eye, as both film and digital cameras can accumulate and sum photons over long periods of time.

Astrophotography poses challenges that are distinct from normal photography, because most subjects are usually quite faint, and are often small in angular size. Effective astrophotography requires the use of many of the following techniques:

  • Mounting the camera at the focal point of a large telescope
  • Emulsions designed for low light sensitivity
  • Very long exposure times and/or multiple exposures (often more than 20 per image), combined in software as sketched after this list.
  • Tracking the subject to compensate for the rotation of the Earth during the exposure
  • Gas hypersensitizing of emulsions to make them more sensitive (not common anymore)
  • Use of filters to reduce background fogging due to light pollution of the night sky.
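A minimal sketch of the multiple-exposure stacking mentioned in the list above, assuming already-aligned frames and simple mean stacking (file handling and alignment are omitted; the data here are synthetic):

    import numpy as np

    def stack_frames(frames):
        """Average a list of equally sized 2-D exposures into one image."""
        return np.mean(np.stack(frames, axis=0), axis=0)

    # Synthetic data standing in for 20 short exposures of the same field:
    rng = np.random.default_rng(0)
    frames = [rng.poisson(5, size=(64, 64)).astype(float) for _ in range(20)]
    stacked = stack_frames(frames)
    print(frames[0].std(), stacked.std())    # noise drops roughly as 1/sqrt(N)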

Microvia Technology

Microvias are small holes in the range of 50-100 µm. In most cases they are blind vias from the outer layers to the first inner layer.
The development of very complex Integrated Circuits (ICs) with extremely high input/output counts, coupled with steadily increasing clock rates, has forced electronics manufacturers to develop new packaging and assembly techniques. Components with pitches less than 0.30 mm, chip-scale packages, and flip-chip technology are underlining this trend and highlight the importance of new printed wiring board technologies able to cope with the requirements of modern electronics.

In addition, more and more electronic devices have to be portable and consequently systems integration, volume and weight considerations are gaining importance.
These portables are usually battery powered resulting in a trend towards lower voltage power supplies, with their implication in PCB (Printed Circuit Board) complexity.

As a result of the above considerations, the future PCB will be characterized by very high interconnection density with finer lines and spaces, smaller holes and decreasing thickness. To gain more landing pads for small footprint components the use of microvias becomes a must.

DNA Based Computing

Biological and mathematical operations have some similarities, despite their respective complexities:


1. The very complex structure of a living being is the result of applying simple operations to initial information encoded in a DNA sequence;

2. The result f(w) of applying a computable function to an argument w can be obtained by applying a combination of basic simple functions to w.


For the same reasons that DNA was presumably selected for living organisms as a genetic material, its stability and predictability in reactions, DNA strings can also be used to encode information for mathematical systems.


To solve the Hamiltonian Path problem, the objective is to find a path from start to end going through all the points only once. This problem is difficult for conventional computers to solve because it is a 'non-deterministic polynomial time problem' (NP). NP problems are intractable with deterministic (conventional/serial) computers, but can be solved using non-deterministic (massively parallel) computers. A DNA computer is a type of non-deterministic computer. Dr. Leonard Adleman (1994) was struck with the idea of using sequences of stored nucleotides (Adenine (A), Guanine (G), Cytosine (C), Thymine (T)) in molecules of DNA to store computer instructions and data in place of the sequences of electrical, magnetic or optical on-off states (0, 1 – Boolean Logic) used in today’s computers. The Hamiltonian Path problem was chosen because it is known as 'NP-complete'; every NP problem can be reduced to a Hamiltonian Path problem.


The following algorithm solves the Hamiltonian Path problem (a small software sketch of these steps appears after step 5):

1. Generate random paths through the graph.

2. Keep only those paths that begin with the start city (A) and conclude with the end city (G).

3. If the graph has n cities, keep only those paths with n cities. (n = 7)

4. Keep only those paths that enter all cities at least once.

5. Any remaining paths are solutions.
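Below is a small, purely illustrative software analogue of these filtering steps, run on a made-up four-city directed graph (it does not reproduce the seven-city instance referred to above):

    from itertools import product

    edges = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}   # assumed example graph
    cities = ["A", "B", "C", "D"]
    start, end = "A", "D"

    def valid(path):
        """A candidate path is usable only if every consecutive hop is an edge."""
        return all((a, b) in edges for a, b in zip(path, path[1:]))

    # Step 1: generate candidate paths (here: all edge-respecting sequences of length n).
    candidates = [p for p in product(cities, repeat=len(cities)) if valid(p)]
    # Steps 2-4: keep paths that start at A, end at D, have n cities and visit them all.
    solutions = [p for p in candidates
                 if p[0] == start and p[-1] == end and set(p) == set(cities)]
    # Step 5: any remaining paths are solutions.
    print(solutions)    # [('A', 'B', 'C', 'D')]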

The unrestricted model of DNA computing is the key to carrying out the five steps of the above algorithm. The following operations can be used to 'program' a DNA computer:

o Synthesis of a desired strand

o Separation of strands by length

o Merging: pour two test tubes into one to perform union

o Extraction: extract those strands containing a given pattern

o Melting/Annealing: break/bond two ssDNA molecules with complementary sequences

o Amplification: use PCR to make copies of DNA strands

o Cutting: cut DNA with restriction enzymes

o Ligation: Ligate DNA strands with complementary sticky ends using ligase

o Detection: Confirm presence/absence of DNA in a given test tube

Since Adleman's original experiment, several methods to reduce error and improve efficiency have been developed. The Restricted model of DNA computing solves several physical problems with the Unrestricted model. The Restricted model simplifies the physical obstructions in exchange for some additional logical considerations. The purpose of this restructuring is to simplify biochemical operations and reduce the errors due to physical obstructions.


The Restricted model of DNA computing:

o Separate: isolate a subset of DNA from a sample

o Merging: pour two test tubes into one to perform union

o Detection: Confirm presence/absence of DNA in a given test tube

Despite these restrictions, this model can still solve NP-complete problems such as the 3-colourability problem, which decides if a map can be coloured with three colours in such a way that no two adjacent territories have the same colour. Error control is achieved mainly through logical operations, such as running all DNA samples showing positive results a second time to reduce false positives. Some molecular proposals, such as using DNA with a peptide backbone for stability, have also been recommended.

DNA computing brings great optimism that the computer industry may be revolutionized by the use of DNA molecules in a computer in place of electronics, circuits and magnetic or optical storage media. Obviously, to perform one calculation at a time (serial logic), DNA computers are not a viable option. However, if one wanted to perform many calculations simultaneously (parallel logic), a computer such as the one described above can easily perform 10^14 million instructions per second (MIPS). DNA computers also require less energy and space. In DNA computers, data are entered and coded into DNA by chemical reactions, and retrieved by synthesizing a key DNA strand and making it react with the existing strands; the key DNA will stick to the required DNA strands containing the data.


In short, in a DNA computer, the input and output are both strands of DNA. Furthermore, a computer in which the strands are attached to the surface of a chip (a DNA chip) can now solve difficult problems quite quickly.
