FireWire

FireWire (also known as i.Link or IEEE 1394) is a personal computer (and digital audio/digital video) serial bus interface standard, offering high-speed communications and isochronous real-time data services. FireWire has replaced Parallel SCSI in many applications due to lower implementation costs and a simplified, more adaptable cabling system.

Almost all modern digital camcorders have included this connection since 1995. Many computers intended for home or professional audio/video use have built-in FireWire ports, including all Macintosh, Dell and Sony computers currently produced. FireWire was also an attractive feature on the Apple iPod for several years, permitting new tracks to be uploaded in a few seconds and the battery to be recharged concurrently over a single cable. However, Apple has eliminated FireWire support in favor of Universal Serial Bus (USB) 2.0 on its newer iPods, due to space constraints and for wider compatibility.

Telestrator

The telestrator is a device that allows its operator to draw a freehand sketch over a motion picture image.

The telestrator was invented by physicist Leonard Reiffel, who used it to draw illustrations on a series of science shows he did for public television in the late 1960s. The user interface for early telestrators required the user to draw on a TV screen with a light pen, whereas modern implementations are commonly controlled with a touch screen or tablet PC.

Today telestrators are widely used in broadcasts of all major sports. They have also become a useful tool in televised weather reports.

Digital Micromirror Device

A Digital Micromirror Device, or DMD, is an optical semiconductor that is the core of DLP projection technology. It was invented by Dr. Larry Hornbeck and Dr. William E. 'Ed' Nelson of Texas Instruments (TI) in 1987. The DMD project began as the Deformable Mirror Device in 1977, using micromechanical, analog light modulators. The first analog DMD product was the TI DMD2000 airline ticket printer, which used a DMD instead of a laser scanner.

A DMD chip has on its surface several hundred thousand microscopic mirrors arranged in a rectangular array, corresponding to the pixels in the image to be displayed. The mirrors can be individually rotated ±10-12° to an on or off state. In the on state, light from the bulb is reflected onto the lens, making the pixel appear bright on the screen. In the off state, the light is directed elsewhere (usually onto a heatsink), making the pixel appear dark.

To produce greyscales, the mirror is toggled on and off very quickly, and the ratio of on time to off time determines the shade produced (binary pulse-width modulation). Contemporary DMD chips can produce up to 1024 shades of gray. See DLP for a discussion of how color images are produced in DMD-based systems.

The mirrors themselves are made of aluminum and are around 16 micrometres across. Each one is mounted on a yoke which in turn is connected to two support posts by compliant torsion hinges. In this type of hinge, the axle is fixed at both ends and literally twists in the middle. Because of the small scale, hinge fatigue is not a problem, and tests have shown that even a trillion operations do not cause noticeable damage. Tests have also shown that the hinges cannot be damaged by normal shock and vibration, since these are absorbed by the DMD superstructure. Two pairs of electrodes on either side of the hinge control the position of the mirror by electrostatic attraction: one pair acts on the yoke and the other acts on the mirror directly.
The majority of the time, equal bias charges are applied to both sides simultaneously. Instead of flipping to a central position as one might expect, this actually holds the mirror in its current position, because the attraction force on the side the mirror is already tilted towards is greater, that side being closer to the electrodes. To move the mirror, the required state is first loaded into an SRAM cell located beneath the pixel, which is also connected to the electrodes. The bias voltage is then removed, allowing the charges from the SRAM cell to prevail, moving the mirror. When the bias is restored, the mirror is once again held in position, and the next required movement can be loaded into the memory cell. The bias system is used because it reduces the voltage levels required to address the pixels such that they can be driven directly from the SRAM cell, and also because the bias voltage can be removed at the same time for the whole chip, meaning every mirror moves at the same instant. The advantages of the latter are more accurate timing and a more filmic moving image.
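The on/off duty-cycle arithmetic behind binary pulse-width modulation can be sketched in a few lines of Python. This is a simplified model for illustration; the bit depth and drive schemes of real DMD controllers vary.

```python
# Sketch of binary pulse-width modulation on a DMD: the mirror is either
# fully on or fully off, and the shade is set by the fraction of the frame
# time spent in the "on" state. (Illustrative model, not TI's actual scheme.)

def pwm_bitplanes(shade, bits=10):
    """Split a shade (0 .. 2**bits - 1) into binary-weighted bit-planes.

    Bit-plane b is displayed for 2**b time units, so the total "on" time
    equals the shade value itself.
    """
    assert 0 <= shade < 2 ** bits
    planes = [(shade >> b) & 1 for b in range(bits)]
    on_time = sum(bit * (2 ** b) for b, bit in enumerate(planes))
    return planes, on_time

# A 10-bit DMD gives 1024 shades of grey (0 = black, 1023 = full white).
planes, on_time = pwm_bitplanes(640)
print(on_time)   # 640 of 1023 time units spent "on"
```

Toggling through the bit-planes fast enough (thousands of times per second) lets the eye integrate the on/off flicker into a steady grey level.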

VIRTUAL SURGERY

Virtual surgery is a computer-based simulated surgery, which can teach surgeons new procedures and can determine their level of competence before they operate on patients. Virtual surgery is based on the concept of virtual reality. A simulated model of the human anatomy, which looks, feels and responds like a real human body, is created for the surgeon to operate on. Virtual reality simulators consist of force feedback devices, a real-time haptic computer, haptic software, a dynamic simulator and 3D graphics. Using 3D visualization technologies and haptic devices, a surgery can be performed in which the surgeon reaches into the virtual patient with their hands to touch, feel, grasp and manipulate the simulated organs.

Plasma Television

Television has been around since the early 20th century, and for the past 50 years it has held a pretty common place in our living rooms. Since the invention of television, engineers have been striving to produce slim and flat displays that deliver images as good as or better than the bulky CRT, and scores of research teams all over the world have been working to achieve this. Plasma television has achieved this goal. Plasma and high-definition are just two of the latest technologies inside it to hit stores. The main contenders in the flat race are the PDP (Plasma Display Panel) and the flat CRT, along with the LCD and FED (Field Emission Display).

To get an idea of what makes a plasma display different, one needs to understand how a conventional TV set works. Conventional TVs use a CRT (cathode ray tube) to create the images we see on the screen. The cathode is a heated filament, like the one in a light bulb, housed inside a vacuum created in a tube of thick glass; that is what makes a conventional TV so big and heavy. The newest entrant in the field of flat panel display systems is the plasma display. Plasma display panels do not contain cathode ray tubes, and their pixels are activated differently.

Femtotechnology

Femtotechnology is a term used by some futurists to refer to structuring of matter on a femtometre scale, by analogy with nanotechnology and picotechnology. This involves the manipulation of excited energy states within atomic nuclei to produce metastable (or otherwise stabilized) states with unusual properties. In the extreme case, excited states of nucleons are considered, ostensibly to tailor the behavioral properties of these particles (though this is in practice unlikely to work as intended).

Practical applications of femtotechnology are currently considered to be unlikely. The spacings between nuclear energy levels require equipment capable of efficiently generating and processing gamma rays, without equipment degradation. The nature of the strong interaction is such that excited nuclear states tend to be very unstable (unlike the excited electron states in Rydberg atoms), and there are a finite number of excited states below the nuclear binding energy, unlike the (in principle) infinite number of bound states available to an atom's electrons. Similarly, what is known about the excited states of individual nucleons seems to indicate that these do not produce behavior that in any way makes nucleons easier to use or manipulate, and indicates instead that these excited states are even less stable and fewer in number than the excited states of atomic nuclei.

The hypothetical hafnium bomb can be considered a crude application of femtotechnology.

Free space laser communication

Lasers have been considered for space communication since their realization in 1960. It was soon recognized that the laser had the potential to transfer data at extremely high rates.

Features of laser communication

Extremely high bandwidth and large information throughput are available, many times greater than with RF communication. Modulating a helium-neon laser (frequency 4.7 x 10^14 Hz) results in a channel bandwidth of 4700 GHz, which is enough to carry a million simultaneous TV channels.
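The arithmetic behind these figures can be checked quickly. The 1% usable-modulation-bandwidth fraction and the 6 MHz analogue TV channel width below are assumptions for illustration.

```python
# Back-of-the-envelope check of the laser-bandwidth claim above.
carrier_hz = 4.7e14                  # He-Ne laser optical frequency
channel_bw_hz = 0.01 * carrier_hz    # assume ~1% of carrier is usable: 4700 GHz
tv_channel_hz = 6e6                  # one analogue TV channel (assumed 6 MHz)

print(channel_bw_hz / 1e9)              # 4700.0 GHz of modulation bandwidth
print(channel_bw_hz / tv_channel_hz)    # ~783,000 simultaneous TV channels
```

About 780,000 channels, i.e. roughly the "million simultaneous TV channels" quoted in the text.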

Small antenna size requires only a small increase in the weight and volume of the satellite, and reduces blockage of the fields of view of the most desirable areas on satellites. Laser satellite communication equipment can provide advantages of 3:1 in mass and 2:1 in power relative to microwave systems.

Narrow beam divergence affords interference-free and secure operation: the existence of laser beams cannot be detected with spectrum analyzers. The antenna gain made possible by the narrow beam enables a small telescope aperture to be used.

The 1550 nm wavelength technology has the added advantage of being inherently eye-safe at the power levels used in free space systems, alleviating the health and safety concerns often raised when using lasers in an open environment where human exposure is possible.

Laser technology can meet the needs of a variety of space missions, including intersatellite links, Earth to near-space links, and deep space missions. The vast distances to deep space make data return via conventional radio frequency techniques extremely difficult.


Microphotonics

Microphotonics is a branch of technology that deals with directing light on a microscopic scale. It is used in optical networking.
Microphotonics employs at least two different materials with a large differential index of refraction to squeeze the light down to a small size. Generally speaking, virtually all of microphotonics relies on Fresnel reflection to guide the light. If the photons reside mainly in the higher-index material, the confinement is due to total internal reflection. If the confinement is due to many distributed Fresnel reflections, the device is termed a photonic crystal. Many different geometries are used in microphotonics, including optical waveguides, optical microcavities and arrayed waveguide gratings.
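The two confinement mechanisms can be put in numbers. The sketch below uses a silicon core in silica cladding, a typical high-index-contrast pair; the round index values are assumptions for illustration.

```python
import math

# Illustrative high-index-contrast pair: silicon core in silica cladding.
n_core, n_clad = 3.48, 1.44   # assumed round values

# Normal-incidence Fresnel power reflectance at a single interface:
R = ((n_core - n_clad) / (n_core + n_clad)) ** 2

# Critical angle for total internal reflection (light travelling in the core):
theta_c = math.degrees(math.asin(n_clad / n_core))

print(R)        # ~0.17: a strong single-interface Fresnel reflection
print(theta_c)  # ~24 degrees: beyond this, rays are totally internally reflected
```

The large index contrast is what lets the light be confined in a waveguide only a few hundred nanometres across; a photonic crystal stacks many such interfaces so the distributed Fresnel reflections add up to near-total reflection in a chosen wavelength band.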
Light bounces off the small yellow square that MIT physics professor John Joannopoulos is showing off. It looks like a scrap of metal, something a child might pick up as a plaything. But it isn't a toy, and it isn't metal. Made of a few ultrathin layers of non-conducting material, this photonic crystal is the latest in a series of materials that reflect various wavelengths of light almost perfectly. Photonic crystals are on the cutting edge of microphotonics: technologies for directing light on a microscopic scale that will make a major impact on telecommunications.

In the short term, microphotonics could break up the logjam caused by the rocky union of fiber optics and electronic switching in the telecommunications backbone. Photons barreling through the network's optical core run into bottlenecks when they must be converted into the much slower streams of electrons that are handled by electronic switches and routers. To keep up with the Internet's exploding need for bandwidth, technologists want to replace electronic switches with faster, miniature optical devices, a transition that is already under way. Because of the large payoff, a much faster all-optical Internet, many competitors are vying to create such devices. Large telecom equipment makers, including Lucent Technologies, Agilent Technologies and Nortel Networks, as well as a number of startup companies, are developing new optical switches and devices. Their innovations include tiny micromirrors, silicon waveguides, even microscopic bubbles to better direct light.
But none of these fixes has the technical elegance and widespread utility of photonic crystals. In Joannopoulos' lab, photonic crystals are providing the means to create optical circuits and other small, inexpensive, low-power devices that can carry, route and process data at the speed of light. 'The trend is to make light do as many things as possible,' Joannopoulos says. 'You may not replace electronics completely, but you want to make light do as much as you can.'
Conceived in the late 1980s, photonic crystals are to photons what semiconductors are to electrons, offering an excellent medium for controlling the flow of light. Like the doorman of an exclusive club, the crystals admit or reflect specific photons depending on their wavelength and the design of the crystal. In the 1990s, Joannopoulos suggested that defects in the crystals' regular structure could bribe the doorman, providing an effective and efficient method to trap the light or route it through the crystal.
Since then, Joannopoulos has been a pioneer in the field, writing the definitive book on the subject in 1995: Photonic Crystals: Molding the Flow of Light. 'That's the way John thinks about it,' says MIT materials scientist and collaborator Edwin Thomas. 'Molding the flow of light, by confining light and figuring out ways to make light do his bidding (bend, go straight, split, come back together) in the smallest possible space.'
Joannopoulos' group has produced several firsts. They explained how crystal filters could pick out specific streams of light from the flood of beams in wavelength division multiplexing, or WDM, a technology used to increase the amount of data carried per fiber (TR, March/April 1999). The lab's work on two-dimensional photonic crystals set the stage for the world's smallest laser and electromagnetic cavity, key components in building integrated optical circuits.
But even if the dream of an all-optical Internet comes to pass, another problem looms. So far, network designers have found ingenious ways to pack more and more information into fiber optics, both by improving the fibers and by using tricks like WDM. But within five to 10 years, some experts fear it won't be possible to squeeze any more data into existing fiber optics.
The way around this may be a type of photonic crystal recently created by Joannopoulos' group: a 'perfect mirror' that reflects specific wavelengths of light from every angle with extraordinary efficiency. Hollow fibers lined with this reflector could carry up to 1,000 times more data than current fiber optics, offering a solution when glass fibers reach their limits. And because it doesn't absorb and scatter light like glass, the invention may also eliminate the expensive signal amplifiers needed every 60 to 80 kilometers in today's optical networks. Joannopoulos is now exploring the theoretical limits of photonic crystals. How much smaller can devices be made, and how can they be integrated into optical chips for use in telecommunications and, perhaps, ultrafast optical computers? Says Joannopoulos: 'Once you start being able to play with light, a whole new world opens up.'
References:
Wikipedia, Technology Review (MIT)

Astrophotography

Astrophotography is a specialised type of photography that entails making photographs of astronomical objects in the night sky such as planets, stars, and deep sky objects such as star clusters and galaxies.

Astrophotography is used to reveal objects that are too faint to observe with the naked eye, as both film and digital cameras can accumulate and sum photons over long periods of time.

Astrophotography poses challenges that are distinct from normal photography, because most subjects are usually quite faint, and are often small in angular size. Effective astrophotography requires the use of many of the following techniques:

  • Mounting the camera at the focal point of a large telescope
  • Emulsions designed for low light sensitivity
  • Very long exposure times and/or multiple exposures (often more than 20 per image).
  • Tracking the subject to compensate for the rotation of the Earth during the exposure
  • Gas hypersensitizing of emulsions to make them more sensitive (not common anymore)
  • Use of filters to reduce background fogging due to light pollution of the night sky.
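The benefit of multiple exposures can be illustrated with a toy simulation: averaging N noisy frames of the same faint scene shrinks the random noise by roughly the square root of N while the signal is unchanged. All numbers below are made up purely for illustration.

```python
import random
import statistics

# Toy model of exposure stacking in astrophotography.
random.seed(0)

TRUE_SIGNAL = 5.0    # "brightness" of a faint object
NOISE_SIGMA = 3.0    # per-frame random noise
N_FRAMES = 25        # e.g. "more than 20 exposures per image"

def stacked_frame():
    """Average of N_FRAMES independent noisy measurements of the scene."""
    return statistics.mean(
        TRUE_SIGNAL + random.gauss(0, NOISE_SIGMA) for _ in range(N_FRAMES)
    )

# Empirically, the spread of stacked values is ~ NOISE_SIGMA / sqrt(N_FRAMES).
stacks = [stacked_frame() for _ in range(500)]
print(statistics.pstdev(stacks))   # roughly 0.6, versus 3.0 for a single frame
```

With 25 frames the noise drops about fivefold, which is why faint galaxies invisible in any single exposure emerge cleanly from a stack.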

Microvia Technology

Microvias are small holes in the range of 50-100 µm. In most cases they are blind vias from the outer layers to the first inner layer.
The development of very complex Integrated Circuits (ICs) with extremely high input/output counts, coupled with steadily increasing clock rates, has forced electronics manufacturers to develop new packaging and assembly techniques. Components with pitches less than 0.30 mm, chip scale packages, and flip chip technology underline this trend and highlight the importance of new printed wiring board technologies able to cope with the requirements of modern electronics.

In addition, more and more electronic devices have to be portable, and consequently systems integration, volume and weight considerations are gaining importance.
These portables are usually battery powered, resulting in a trend towards lower-voltage power supplies, with implications for PCB (Printed Circuit Board) complexity.

As a result of the above considerations, the future PCB will be characterized by very high interconnection density with finer lines and spaces, smaller holes and decreasing thickness. To gain more landing pads for small footprint components the use of microvias becomes a must.

DNA Based Computing

Biological and mathematical operations have some similarities, despite their respective complexities:


1. The very complex structure of a living being is the result of applying simple operations to initial information encoded in a DNA sequence;

2. The result f(w) of applying a computable function to an argument w can be obtained by applying a combination of basic simple functions to w.


For the same reasons that DNA was presumably selected as the genetic material of living organisms (namely its stability and predictability in reactions), DNA strings can also be used to encode information for mathematical systems.


To solve the Hamiltonian Path problem, the objective is to find a path from start to end going through all the points only once. This problem is difficult for conventional computers to solve because it is a 'non-deterministic polynomial time problem' (NP). NP problems are intractable with deterministic (conventional/serial) computers, but can be solved using non-deterministic (massively parallel) computers. A DNA computer is a type of non-deterministic computer. Dr. Leonard Adleman (1994) was struck with the idea of using sequences of stored nucleotides (Adenine (A), Guanine (G), Cytosine (C), Thymine (T)) in molecules of DNA to store computer instructions and data in place of the sequences of electrical, magnetic or optical on-off states (0, 1 – Boolean Logic) used in today’s computers. The Hamiltonian Path problem was chosen because it is known as 'NP-complete'; every NP problem can be reduced to a Hamiltonian Path problem.


The following algorithm solves the Hamiltonian Path problem:

1. Generate random paths through the graph.

2. Keep only those paths that begin with the start city (A) and conclude with the end city (G).

3. If the graph has n cities, keep only those paths with n cities. (n = 7)

4. Keep only those paths that enter all cities at least once.

5. Any remaining paths are solutions.
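The five filtering steps can be mimicked in conventional software. The sketch below runs them on a hypothetical 7-city directed graph; the city labels A-G and the edge set are assumptions for illustration, and Adleman's actual graph differed.

```python
import itertools

# Hypothetical 7-city directed graph (assumed for illustration).
EDGES = {('A', 'B'), ('A', 'C'), ('B', 'C'), ('B', 'D'), ('C', 'D'),
         ('C', 'E'), ('D', 'E'), ('D', 'F'), ('E', 'F'), ('F', 'G')}
CITIES = 'ABCDEFG'

def hamiltonian_paths(start='A', end='G'):
    solutions = []
    # Step 1: "generate" candidate paths (here: every ordering of the cities,
    # standing in for the random DNA paths formed in the test tube).
    for perm in itertools.permutations(CITIES):
        path = ''.join(perm)
        # Steps 2-4: keep paths that start at A and end at G, visit every
        # city (automatic for a permutation of all n = 7 cities), and whose
        # consecutive cities are actually joined by an edge.
        if path[0] != start or path[-1] != end:
            continue
        if all((a, b) in EDGES for a, b in zip(path, path[1:])):
            solutions.append(path)   # Step 5: the survivors are solutions
    return solutions

print(hamiltonian_paths())   # ['ABCDEFG'] for this particular graph
```

The contrast with the DNA version is the point: the loop above examines 5,040 orderings one at a time, whereas the molecules in Adleman's test tube explore all candidate paths simultaneously.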

The Unrestricted model of DNA computing provides the operations needed to carry out the five steps of the above algorithm. These operations can be used to 'program' a DNA computer.

o Synthesis of a desired strand

o Separation of strands by length

o Merging: pour two test tubes into one to perform union

o Extraction: extract those strands containing a given pattern

o Melting/Annealing: break/bond two ssDNA molecules with complementary sequences

o Amplification: use PCR to make copies of DNA strands

o Cutting: cut DNA with restriction enzymes

o Ligation: Ligate DNA strands with complementary sticky ends using ligase

o Detection: Confirm presence/absence of DNA in a given test tube

Since Adleman's original experiment, several methods to reduce error and improve efficiency have been developed. The Restricted model of DNA computing solves several physical problems with the Unrestricted model. The Restricted model simplifies the physical obstructions in exchange for some additional logical considerations. The purpose of this restructuring is to simplify biochemical operations and reduce the errors due to physical obstructions.


The Restricted model of DNA computing:

o Separate: isolate a subset of DNA from a sample

o Merging: pour two test tubes into one to perform union

o Detection: Confirm presence/absence of DNA in a given test tube

Despite these restrictions, this model can still solve NP-complete problems such as the 3-colourability problem, which decides if a map can be coloured with three colours in such a way that no two adjacent territories have the same colour. Error control is achieved mainly through logical operations, such as running all DNA samples showing positive results a second time to reduce false positives. Some molecular proposals, such as using DNA with a peptide backbone for stability, have also been recommended.
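For comparison, the 3-colourability decision that the Restricted model tackles looks like this on a conventional computer, by brute force over all colourings. The example maps below are made up for illustration.

```python
from itertools import product

def three_colourable(territories, borders):
    """Return True if the map can be coloured with 3 colours such that
    no two adjacent territories share a colour (brute force over all
    3**n candidate colourings)."""
    for colouring in product(range(3), repeat=len(territories)):
        colour = dict(zip(territories, colouring))
        if all(colour[a] != colour[b] for a, b in borders):
            return True
    return False

# A triangle of territories plus one extra neighbour: 3 colours suffice.
print(three_colourable('ABCD', [('A', 'B'), ('B', 'C'), ('A', 'C'), ('C', 'D')]))
# Four mutually adjacent territories (K4): cannot be 3-coloured.
print(three_colourable('ABCD', [(a, b) for a in 'ABCD' for b in 'ABCD' if a < b]))
```

The 3^n blow-up in candidates is exactly what the massively parallel DNA approach sidesteps: every candidate colouring is represented by its own strand and filtered in one pass.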

DNA computing brings great optimism that the computer industry may be revolutionized by using molecules of DNA in place of electronics, circuits and magnetic or optical storage media. To perform one calculation at a time (serial logic), DNA computers are not a viable option. However, for many calculations performed simultaneously (parallel logic), a computer such as the one described above can perform on the order of 10^14 million instructions per second (MIPS). DNA computers also require less energy and space. In a DNA computer, data are entered and encoded into DNA by chemical reactions, and retrieved by synthesizing key DNA strands and letting them react with the existing strands; the key DNA sticks to the strands containing the required data.


In short, in a DNA computer the input and output are both strands of DNA. Furthermore, a computer in which the strands are attached to the surface of a chip (a DNA chip) can now solve difficult problems quite quickly.

Direct to Home Television (DTH)

Direct to home (DTH) television is a wireless system for delivering television programs directly to the viewer’s house. In DTH television, the broadcast signals are transmitted from satellites orbiting the Earth to the viewer’s house. Each satellite is located approximately 35,700 km above the Earth in geosynchronous orbit. These satellites receive the signals from the broadcast stations located on Earth and rebroadcast them to the Earth.
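The roughly 35,700 km figure follows from Kepler's third law: a geosynchronous satellite's orbital period must equal one sidereal day. A quick check, using standard constants:

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.1           # one sidereal day, s
R_EARTH = 6.378e6     # Earth's equatorial radius, m

# Kepler's third law: T^2 = 4*pi^2 * r^3 / GM, solved for the orbital radius r.
r = (GM * T ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000

print(altitude_km)   # ~35,786 km above the equator
```

The calculated altitude matches the approximate value quoted above; the satellite then circles the Earth once per day and so appears fixed in the sky, which is why the viewer's dish can be pointed once and left alone.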

The viewer’s dish picks up the signal from the satellite and passes it on to the receiver located inside the viewer’s house. The receiver processes the signal and passes it on to the television.

DTH provides more than 200 television channels with excellent reception quality, along with teleshopping, fax and internet facilities. DTH television is used in millions of homes across the United States, Europe and South East Asia.

Global System for Mobiles

A GSM network is composed of several functional entities, whose functions and interfaces are specified. Figure 1 shows the layout of a generic GSM network. The GSM network can be divided into three broad parts. The Mobile Station is carried by the subscriber. The Base Station Subsystem controls the radio link with the Mobile Station. The Network Subsystem, the main part of which is the Mobile services Switching Center (MSC), performs the switching of calls between the mobile users, and between mobile and fixed network users. The MSC also handles the mobility management operations. Not shown is the Operations and Maintenance Center, which oversees the proper operation and setup of the network. The Mobile Station and the Base Station Subsystem communicate across the Um interface, also known as the air interface or radio link. The Base Station Subsystem communicates with the Mobile services Switching Center across the A interface.

Free Space Optics

Mention optical communication and most people think of fiber optics. But light travels through air for a lot less money. So it is hardly a surprise that clever entrepreneurs and technologists are borrowing many of the devices and techniques developed for fiber-optic systems and applying them to what some call fiber-free optical communication. Although it only recently, and rather suddenly, sprang into public awareness, free-space optics is not a new idea. It has roots that go back over 30 years--to the era before fiber-optic cable became the preferred transport medium for high-speed communication. In those days, the notion that FSO systems could provide high-speed connectivity over short distances seemed futuristic, to say the least. But research done at that time has made possible today's free-space optical systems, which can carry full-duplex (simultaneous bidirectional) data at gigabit-per-second rates over metropolitan distances of a few city blocks to a few kilometers.


FSO first appeared in the 1960s, for military applications. At the end of the 1980s it appeared as a commercial option, but technological restrictions prevented its success. Short transmission reach, low capacity, severe alignment problems and vulnerability to weather interference were the major drawbacks at that time. Optical communication without wires, however, has evolved: today, FSO systems guarantee 2.5 Gb/s rates with carrier-class availability. Metropolitan, access and LAN networks are reaping the benefits. FSO's success can be measured by its market numbers: forecasts predict it will reach a US$2.5 billion market by 2006.

The use of free space optics is particularly interesting when we consider that the majority of customers do not have access to fiber, and fiber installation is expensive and takes a long time. Moreover, right-of-way costs and difficulties in obtaining government licenses for new fiber installation are further problems that have turned FSO into the option of choice for short-reach applications.

FSO uses lasers, or light pulses, to send packetized data in the terahertz (THz) spectrum range. Air, not fiber, is the transport medium. This means that urban businesses needing fast data and Internet access have a significantly lower-cost option.

An FSO system for local loop access comprises several laser terminals, each one residing at a network node to create a single, point-to-point link; an optical mesh architecture; or a star topology, which is usually point-to-multipoint. These laser terminals, or nodes, are installed on top of customers' rooftops or inside a window to complete the last-mile connection. Signals are beamed to and from hubs or central nodes throughout a city or urban area. Each node requires a Line-Of-Sight (LOS) view of the hub.

RFID


Radio Frequency Identification (RFID) is an automatic identification method, relying on storing and remotely retrieving data using devices called RFID tags or transponders. An RFID tag is a small object that can be attached to or incorporated into a product, animal, or person. RFID tags contain silicon chips and antennas to enable them to receive and respond to radio-frequency queries from an RFID transceiver. Passive tags require no internal power source, whereas active tags require a power source.

RFID tags can be either passive, semi-passive (also known as semi-active), or active.

Passive

Passive RFID tags have no internal power supply. The minute electrical current induced in the antenna by the incoming radio frequency signal provides just enough power for the CMOS integrated circuit (IC) in the tag to power up and transmit a response. Most passive tags signal by backscattering the carrier signal from the reader. This means that the aerial (antenna) has to be designed both to collect power from the incoming signal and to transmit the outbound backscatter signal.

The response of a passive RFID tag is not just an ID number (GUID): the tag chip can contain nonvolatile EEPROM (Electrically Erasable Programmable Read-Only Memory) for storing data. The lack of an onboard power supply means that the device can be quite small: commercially available products exist that can be embedded under the skin. As of 2006, the smallest such devices measured 0.15 mm x 0.15 mm and are thinner than a sheet of paper (7.5 micrometers). The addition of the antenna creates a tag that varies from the size of a postage stamp to the size of a post card.

Passive tags have practical read distances ranging from about 2 mm (ISO 14443) up to a few meters (EPC and ISO 18000-6), depending on the chosen radio frequency and antenna design/size. Due to their simplicity in design, they are also suitable for manufacture with a printing process for the antennae. Passive RFID tags do not require batteries, can be much smaller, and have an unlimited life span.

Non-silicon tags made from polymer semiconductors are currently being developed by several companies globally. Simple laboratory-printed polymer tags operating at 13.56 MHz were demonstrated in 2005 by both PolyIC (Germany) and Philips (The Netherlands). If successfully commercialized, polymer tags will be roll-printable, like a magazine, and much less expensive than silicon-based tags.

Because passive tags are cheaper to manufacture and have no battery, the majority of RFID tags in existence are of the passive variety. As of 2005, these tags cost an average of Euro 0.20 ($0.24 USD) at high volumes.


Semi-passive

Semi-passive RFID tags are very similar to passive tags except for the addition of a small battery. This battery allows the tag IC to be constantly powered, removing the need for the aerial to be designed to collect power from the incoming signal. Aerials can therefore be optimized for the backscattering signal. Semi-passive RFID tags respond faster and therefore achieve a better read ratio than passive tags.


Active

Unlike passive and semi-passive RFID tags, active RFID tags (also known as beacons) have their own internal power source which is used to power any ICs and generate the outgoing signal. They are often called beacons because they broadcast their own signal. They may have longer range and larger memories than passive tags, as well as the ability to store additional information sent by the transceiver. To economize power consumption, many beacon concepts operate at fixed intervals. At present, the smallest active tags are about the size of a coin. Many active tags have practical ranges of tens of meters, and a battery life of up to 10 years.

TETRA

Terrestrial Enhanced(changed from European) Trunked Radio (TETRA) is a specialist Professional Mobile Radio and walkie talkie standard used by police, fire departments, ambulance and military. Its main advantage over technologies such as GSM are:
the much lower frequency used, which permits very high levels of geographic coverage with a smaller number of transmitters, cutting infrastructure cost.
fast call set-up - a one to many group call is generally set-up within 0.5 seconds compared with the many seconds that are required for a GSM network.
the fact that its infrastructure can be separated from that of the public cellphone network, and made substantially more diverse and resilient by the fact that base stations can be some distance from the area served.
unlike most cellular technologies, TETRA networks typically provide a number of fall-back modes such as the ability for a base station to process local calls in the absence of the rest of the network, and for 'direct mode' where mobiles can continue to share channels directly if the infrastructure fails or is out-of-reach.
gateway mode - where a single mobile with connection to the network can act as a relay for other nearby mobiles that are out of contact with the infrastructure.
TETRA also provides a point-to-point function that traditional analogue emergency services radio systems didn't provide. This enables users to have a one-to-one trunked 'radio' link between sets without the need for the direct involvement of a control room operator/dispatcher.
unlike cellular technologies, which connect one subscriber to one other subscriber (one-to-one), TETRA is built to do one-to-one, one-to-many and many-to-many calls. These operational modes are directly relevant to public safety and professional users.
Radio aspects
TETRA uses a digital modulation scheme known as π/4 DQPSK, a form of phase shift keying, together with TDMA (see above). The symbol rate is 18,000 symbols per second, and each symbol maps to 2 bits. A single slot consists of 255 symbols, a single frame consists of 4 slots, and a multiframe (whose duration is approximately 1 second) consists of 18 frames. As a form of phase shift keying, the downlink power is constant. The downlink (i.e. the output of the base station) is a continuous transmission consisting of either specific communications with mobiles, synchronisation or other general broadcasts. Although the system uses 18 frames per second, only 17 of these are used for traffic channels, with the 18th frame reserved for signalling or synchronisation. TETRA does not employ amplitude modulation; however, TETRA has 17.65 frames per second (18,000 symbols/s ÷ 255 symbols/slot ÷ 4 slots/frame), which is the cause of the perceived 'amplitude modulation' at 17 Hz.
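The frame and slot figures above can be checked with a little arithmetic; the sketch below reproduces the roughly 17.65 Hz rate and the approximately one-second multiframe from the numbers given in the text.

```python
# TETRA air-interface timing, derived from the figures in the text:
# 18,000 symbols/s, 255 symbols per slot, 4 slots per frame,
# 18 frames per multiframe, 2 bits per pi/4-DQPSK symbol.
SYMBOL_RATE = 18000
SYMBOLS_PER_SLOT = 255
SLOTS_PER_FRAME = 4
FRAMES_PER_MULTIFRAME = 18

slots_per_second = SYMBOL_RATE / SYMBOLS_PER_SLOT        # ≈ 70.59
frames_per_second = slots_per_second / SLOTS_PER_FRAME   # ≈ 17.65, the "17 Hz" effect
multiframe_s = FRAMES_PER_MULTIFRAME / frames_per_second # 1.02 s, "approximately 1 second"
bits_per_slot = SYMBOLS_PER_SLOT * 2                     # 2 bits per symbol

print(round(frames_per_second, 2), round(multiframe_s, 2), bits_per_slot)
# → 17.65 1.02 510
```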

Orthogonal Frequency Division Multiple Access (OFDMA)


Orthogonal Frequency Division Multiple Access (OFDMA) is a multiple access scheme for OFDM systems. It works by assigning a subset of subcarriers to individual users.

OFDMA features


OFDMA is the 'multi-user' version of OFDM
Functions by partitioning resources in time-frequency space, assigning units along the OFDM symbol index and the OFDM sub-carrier index
Each OFDMA user transmits symbols using sub-carriers that remain orthogonal to those of other users
More than one sub-carrier can be assigned to one user to support high rate applications
Allows simultaneous transmission from several users ⇒ better spectral efficiency
Multiuser interference is introduced if there is frequency synchronization error
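A minimal sketch of the allocation idea in the list above: give each user a disjoint subset of subcarrier indices, so no two users ever occupy the same subcarrier. The round-robin scheduler here is a toy illustration, not any standardized OFDMA allocation algorithm.

```python
# Toy OFDMA resource allocation: each user receives a disjoint subset of
# the OFDM subcarriers, which is what keeps users orthogonal to each other.
N_SUBCARRIERS = 12

def assign_subcarriers(users, n=N_SUBCARRIERS):
    """Round-robin assignment of subcarrier indices to users (illustrative)."""
    allocation = {u: [] for u in users}
    for k in range(n):
        allocation[users[k % len(users)]].append(k)
    return allocation

alloc = assign_subcarriers(["A", "B", "C"])

# No subcarrier is shared between users:
all_carriers = [k for subset in alloc.values() for k in subset]
assert len(all_carriers) == len(set(all_carriers))

print(alloc["A"])  # → [0, 3, 6, 9]
```

Assigning more subcarriers to one user (a longer slice of the round-robin, say) is how higher-rate applications are supported, as noted above.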
The term 'OFDMA' is claimed to be a registered trademark by Runcom Technologies Ltd., with various other claimants to the underlying technologies through patents. It is used in the mobility mode of IEEE 802.16 WirelessMAN Air Interface standard, commonly referred to as WiMAX.

The SIDAC

The SIDAC, or SIlicon Diode for Alternating Current, is a semiconductor of the thyristor family. Also referred to as a SYDAC (Silicon thYristor for Alternating Current), bi-directional thyristor breakover diode, or more simply a bi-directional thyristor diode, it is technically specified as a bilateral voltage triggered switch. Its operation is identical to that of the DIAC; the distinction in naming between the two devices being subject to the particular manufacturer. In general, SIDACs have higher breakover voltages and current handling capacities than DIACs. The operation of the SIDAC is quite simple and is functionally identical to that of a spark gap or similar to two inverse parallel Zener diodes. The SIDAC remains nonconducting until the applied voltage meets or exceeds its rated breakover voltage. Once entering this conductive state, the SIDAC continues to conduct, regardless of voltage, until the applied current falls below its rated holding current. At this point, the SIDAC returns to its initial nonconductive state to begin the cycle once again. Somewhat uncommon in most electronics, the SIDAC is relegated to the status of a special purpose device. However, where part-counts are to be kept low, simple relaxation oscillators are needed, and the voltages are too low for practical operation of a spark gap, the SIDAC is an indispensable component.
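The breakover/holding behaviour described above can be modelled as a simple two-state machine: off until the voltage reaches breakover, then on until the current falls below holding. The threshold values below are illustrative, not taken from any part's datasheet.

```python
# Toy model of SIDAC switching: nonconducting until the applied voltage
# reaches the breakover voltage, then conducting until the current drops
# below the holding current. Thresholds are illustrative assumptions.

class Sidac:
    def __init__(self, v_breakover=110.0, i_holding=0.05):
        self.v_bo = v_breakover
        self.i_h = i_holding
        self.conducting = False

    def step(self, voltage, current):
        # abs() because the device is bilateral (works in both polarities)
        if not self.conducting and abs(voltage) >= self.v_bo:
            self.conducting = True       # breakover: switch on
        elif self.conducting and abs(current) < self.i_h:
            self.conducting = False      # below holding current: switch off
        return self.conducting

s = Sidac()
print(s.step(50, 0.0))    # below breakover → False
print(s.step(120, 1.0))   # breakover exceeded → True
print(s.step(20, 0.5))    # conducts regardless of voltage → True
print(s.step(5, 0.01))    # current below holding → False
```

This on/off hysteresis is exactly what makes the device useful in simple relaxation oscillators: a capacitor charges until breakover, dumps through the SIDAC, and the cycle repeats.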

DYNODE

A dynode is one of a series of electrodes within a photomultiplier tube. Each dynode is more positively charged than its predecessor. Secondary emission occurs at the surface of each dynode. Such an arrangement is able to amplify the tiny current emitted by the photocathode, typically by a factor of one million.
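A quick back-of-the-envelope check of the "one million" figure: if each dynode emits several secondary electrons per incident electron, the overall gain is that per-stage factor raised to the number of stages. The per-stage factor and stage count below are assumed, representative values.

```python
# Photomultiplier gain: each dynode multiplies the electron current by its
# secondary-emission factor, so total gain = delta ** n_stages.
delta = 4      # secondary electrons per incident electron (assumed)
stages = 10    # number of dynodes in the chain (assumed)

gain = delta ** stages
print(gain)    # → 1048576, i.e. about 10**6
```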

Trisil

A Trisil is an electronic component designed to protect electronic circuits against overvoltage. Unlike a Transil, it acts as a crowbar device, switching on when the voltage on it exceeds its breakover voltage. A Trisil is bipolar, behaving the same way in both directions. It is essentially a voltage-controlled triac without a gate. In 1982, the only manufacturer was Thomson SA. This type of crowbar protector is widely used for protecting telecom equipment from lightning-induced transients and induced currents from power lines. Other manufacturers of this type of device include Bourns and Littelfuse. Rather than using the natural breakdown voltage of the device, an extra region is fabricated within the device to form a zener diode. This allows a much tighter control of the breakdown voltage. It is also possible to make gated versions of this type of protector. In this case, the gate is connected to the telecom circuit power supply (via a diode or transistor) so that the device will crowbar if the transient exceeds the power supply voltage. The main advantage of this configuration is that the protection voltage tracks the power supply, eliminating the problem of selecting a particular breakdown voltage for the protection circuit.

Active pixel sensor


An active pixel sensor (APS) is an image sensor consisting of an integrated circuit containing an array of pixels, each containing a photodetector as well as three or more transistors. Since it can be produced by an ordinary CMOS process, APS is emerging as an inexpensive alternative to CCDs.
Architecture
Pixel
The standard CMOS APS pixel consists of three transistors as well as a photodetector.
The photodetector is usually a photodiode, though photogate detectors are used in some devices and can offer lower noise through the use of correlated double sampling. Light causes an accumulation, or integration of charge on the 'parasitic' capacitance of the photodiode, creating a voltage change related to the incident light.
One transistor, Mrst, acts as a switch to reset the device. When this transistor is turned on, the photodiode is effectively connected to the power supply, VRST, clearing all integrated charge. Since the reset transistor is n-type, the pixel operates in soft reset.
The second transistor, Msf, acts as a buffer (specifically, a source follower), an amplifier which allows the pixel voltage to be observed without removing the accumulated charge. Its power supply, VDD, is typically tied to the power supply of the reset transistor.
The third transistor, Msel, is the row-select transistor. It is a switch that allows a single row of the pixel array to be read by the read-out electronics.
Array
A typical two-dimensional array of pixels is organized into rows and columns. Pixels in a given row share reset lines, so that a whole row is reset at a time. The row select lines of each pixel in a row are tied together as well. The outputs of each pixel in any given column are tied together. Since only one row is selected at a given time, no competition for the output line occurs. Further amplifier circuitry is typically on a column basis.
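A toy model of the readout just described: because only one row-select line is asserted at a time, each shared column line carries exactly one pixel's value, and scanning the rows in turn yields a full frame. This is a sketch of the addressing scheme only, not of any real sensor's electronics.

```python
# Sketch of row-at-a-time APS readout: asserting one row-select line puts
# that row's pixel values on the shared column output lines; scanning all
# rows one after another reads out the whole frame.

def read_row(pixel_array, row):
    """With only `row` selected, each column line carries that row's value,
    so there is no contention on the shared column outputs."""
    return list(pixel_array[row])

def read_frame(pixel_array):
    # Rows are scanned sequentially; columns within a row are read in parallel.
    return [read_row(pixel_array, r) for r in range(len(pixel_array))]

frame = [[10, 20], [30, 40]]   # a toy 2x2 sensor
print(read_frame(frame))       # → [[10, 20], [30, 40]]
```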

hydrophone

A hydrophone is a sound-to-electricity transducer for use in water or other liquids, analogous to a microphone for air. Note that a hydrophone can sometimes also serve as a projector (emitter), but not all hydrophones have this capability, and some may be destroyed if used in such a manner. The first device to be called a 'hydrophone' was developed once the technology had matured, and used ultrasonic waves, which provided higher overall acoustic output as well as improved detection. The ultrasonic waves were produced by a mosaic of thin quartz crystals glued between two steel plates, with a resonant frequency of about 150 kHz. Contemporary hydrophones more often use barium titanate, a piezoelectric ceramic material, giving higher sensitivity than quartz. Hydrophones are an important part of the SONAR systems used to detect submarines by both surface vessels and other submarines. A large number of hydrophones were used in the building of various fixed-location detection networks such as SOSUS.

Mesotechnology

Mesotechnology describes a budding research field which could replace nanotechnology in the future as the primary means to control matter at length scales ranging from a cluster of atoms to microscopic elements. The prefix meso- comes from the Greek word mesos, meaning middle, hence the technology spans a range of length scales as opposed to nanotechnology which is concerned only with the smallest atomic scales.

Although the term itself is still quite new, the general concept is not. Many fields of science have traditionally focused either on single discrete elements or on large statistical collections, where many theories have been successfully applied. In the field of physics, for example, quantum mechanics describes very well phenomena on the atomic to nanoscale, while classical Newtonian mechanics describes the behavior of objects on the microscale and up. However, the length scale in the middle (the mesoscale) is not well described by either theory. Similarly, psychologists focus heavily on the behavior and mental processes of the individual, while sociologists study the behavior of large societal groups; but what happens when only three people are interacting? This is the mesoscale.

Brain Computer Interface

The brain-computer interface is a staple of science fiction writing. In its earliest incarnations no mechanism was thought necessary, as the technology seemed so far-fetched that no explanation was likely. As more became known about the brain, however, the possibility has become more real and the science fiction more technically sophisticated. Recently, the cyberpunk movement has adopted the idea of 'jacking in': sliding 'biosoft' chips into slots implanted in the skull (Gibson, W. 1984).

Although such biosofts are still science fiction, there have been several recent steps toward interfacing the brain and computers. Chief among these are techniques for stimulating and recording from areas of the brain with permanently implanted electrodes and using conscious control of EEG to control computers.


Some preliminary work is being done on synapsing neurons onto silicon transistors and on growing neurons into neural networks on top of computer chips. The most advanced work in designing a brain-computer interface has stemmed from the evolution of traditional electrodes. There are essentially two main problems: stimulating the brain (input) and recording from the brain (output).


Traditionally, both input and output were handled by electrodes pulled from metal wires and glass tubing. Using conventional electrodes, multi-unit recordings can be constructed from multibarrelled pipettes. In addition to being fragile and bulky, the electrodes in these arrays are often too far apart, as most fine neural processes are only 0.1 to 2 µm apart.



Pickard describes a new type of electrode which circumvents many of the problems listed above. These printed circuit micro-electrodes (PCMs) are manufactured in the same manner as computer chips. A design of a chip is photoreduced to produce an image on a photosensitive glass plate. This is used as a mask, which covers a UV-sensitive glass or plastic film.

A PCM has three essential elements:

1) the tissue terminals,

2) a circuit board controlling or reading from the terminals, and

3) an input/output controller-interpreter, such as a computer.

Artificial passenger

An artificial passenger (AP) is a device that would be used in a motor vehicle to make sure that the driver stays awake. IBM has developed a prototype that holds a conversation with a driver, telling jokes and asking questions intended to determine whether the driver can respond alertly enough. Assuming the IBM approach, an artificial passenger would use a microphone for the driver and a speech generator and the vehicle's audio speakers to converse with the driver.

The conversation would be based on a personalized profile of the driver. A camera could be used to evaluate the driver's facial state and a voice analyzer to evaluate whether the driver was becoming drowsy. If a driver seemed to display too much fatigue, the artificial passenger might be programmed to open all the windows, sound a buzzer, increase the background music volume, or even spray the driver with ice water.

ShotCode

ShotCode is a circular barcode created by OP3. It uses a dartboard-like circle, with a bull's eye in the centre and datacircles surrounding it. The technology reads databits from these datacircles by measuring the angle and distance from the bull's eye for each.

ShotCodes are designed to be read with a regular camera (including those found on mobile phones and webcams) without the need to purchase other specialised hardware. Because of the circular design, it is also possible for software to detect the angle from which the barcode is read. ShotCodes differ from matrix barcodes in that they do not store regular data - rather, they store an encoded URL which the reading device can connect to in order to download said data.
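The geometric step of reading a ShotCode reduces to polar coordinates about the bull's eye: each datacircle is located by its angle and distance from the centre. The sketch below shows only that conversion; the actual mapping from (angle, distance) to data bits is OP3's own scheme and is not reproduced here.

```python
# Locating a datacircle relative to the bull's eye: the polar-coordinate
# step behind ShotCode reading. The bit-encoding itself is proprietary
# and not modelled here.
import math

def polar_from_bullseye(cx, cy, x, y):
    """Return (angle_degrees, distance) of a datacircle at (x, y)
    relative to a bull's eye at (cx, cy)."""
    angle = math.degrees(math.atan2(y - cy, x - cx)) % 360
    distance = math.hypot(x - cx, y - cy)
    return angle, distance

print(polar_from_bullseye(0, 0, 0, 5))   # → (90.0, 5.0)
```

Because angles are measured relative to the bull's eye, the same computation lets software recover the rotation at which the whole code is being viewed, as noted above.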

Serial ATA (SATA)

In computer hardware, Serial ATA (SATA, IPA: /ˈsæta/ or /ˈseɪˌtə/) is a computer bus technology primarily designed for transfer of data to and from a hard disk. It is the successor to the legacy AT Attachment standard (ATA). This older technology was retroactively renamed Parallel ATA (PATA) to distinguish it from Serial ATA. Both SATA and PATA drives are IDE (Integrated Drive Electronics) drives, although IDE is often misused to indicate PATA drives.

Serial ATA innovations

SATA drops the master/slave shared bus of PATA, giving each device a dedicated cable and dedicated bandwidth. While this requires twice the number of host controllers to support the same number of SATA devices, at the time of SATA's introduction this was no longer a significant drawback. Another controller could be added into a controller ASIC at little cost beyond the addition of the extra seven signal lines and printed circuit board (PCB) space for the cable header.

Features allowed for by SATA but not by PATA include hot-swapping and native command queueing.

To ease their transition to SATA, many manufacturers have produced drives which use controllers largely identical to those on their PATA drives and include a bridge chip on the logic board. Bridged drives have a SATA connector, may include either or both kinds of power connectors, and generally perform identically to native drives. They may, however, lack support for some SATA-specific features. As of 2004, all major hard drive manufacturers produce either bridged or native SATA drives.

SATA drives may be plugged into Serial Attached SCSI (SAS) controllers and communicate on the same physical cable as native SAS disks. SAS disks, however, may not be plugged into a SATA controller.

Physically, the SATA power and data cables are the most noticeable change from Parallel ATA. The SATA standard defines a data cable using seven conductors and 8 mm wide wafer connectors on each end. SATA cables can be up to 1 m (39 in) long. PATA ribbon cables, in comparison, carry either 40- or 80-conductor wires and are limited to 46 cm (18 in) in length. The reduction in conductors makes SATA connectors and cables much narrower than those of PATA, thus making them more convenient to route within tight spaces and reducing obstructions to air cooling. Unlike early PATA connectors, SATA connectors are keyed — it is not possible to install cable connectors upside down without considerable force.

The SATA standard also specifies a power connector sharply differing from the four-pin Molex connector used by PATA drives and many other computer components. Like the data cable, it is wafer-based, but its wider 15-pin shape should prevent confusion between the two. The seemingly large number of pins is used to supply three different voltages if necessary — 3.3 V, 5 V, and 12 V. Each voltage is supplied by three pins ganged together (and 6 pins for ground), because the small pins cannot individually supply sufficient current for some devices. One pin from each of the three voltages is also used for hotplugging. The same physical connections are used on 3.5-in and 2.5-in (notebook) hard disks. Some SATA drives include a PATA-style 4-pin Molex connector for use with power supplies that lack the SATA power connector. Also, adaptors are available to convert a 4-pin Molex connector to a SATA power connector.

External SATA

eSATA was standardized in mid-2004, with specifically defined cables, connectors, and signal requirements for external SATA drives. eSATA is characterized by:

  • Full SATA speed for external disks (115 MB/s has been measured with external RAID enclosures)
  • No protocol conversion from PATA/SATA to USB/FireWire; all disk features are available to the host
  • Cable length is restricted to 2 m, whereas USB and FireWire span longer distances
  • Minimum and maximum transmit voltage decreased to 400mV - 500mV
  • Minimum and maximum receive voltage decreased to 240mV - 500mV

USB and FireWire require conversion of all communication with the external disk, so external USB/FireWire enclosures include a PATA or SATA bridge chip that translates from the ATA protocol to USB or FireWire. Drive features like S.M.A.R.T. cannot be exploited that way, and the achievable transfer speed with USB/FireWire is only about half of the entire bus data rate of about 50 MB/s. This limited effective data transfer rate becomes very visible when using an external RAID array, and also with fast single disks, which may yield well over 70 MB/s in real use.

Currently, most PC motherboards do not have an eSATA connector. eSATA may be enabled through the addition of an eSATA host bus adapter (HBA) or bracket connector for desktop systems or with a Cardbus or ExpressCard for notebooks.

Note: Prior to the final eSATA specification, there were a number of products designed for external connections of SATA drives. Some of these use the internal SATA connector or even connectors designed for other interface specifications, such as FireWire. These products are not eSATA compliant.

eSATA does not provide power, which means that external 2.5-in disks, which would otherwise be powered over the USB or FireWire cable, need a separate power cable when connected over eSATA.


Night vision technology

Night vision technology was developed by the US defense department mainly for defense purposes, but with the development of technology, night vision devices are now used in day-to-day life. In this seminar I wish to bring out the various principles of operation of these devices, which have changed the outlook both on the warfront and in our common lives. Night vision can work in two different ways depending on the technology used. 1. Image enhancement: this works by collecting the tiny amounts of light that are present but may be imperceptible to our eyes, including the lower portion of the infrared light spectrum, and amplifying it to the point that we can easily observe the image. 2. Thermal imaging: this technology operates by capturing the upper portion of the infrared light spectrum, which is emitted as heat by objects instead of simply reflected as light. Hotter objects, such as warm bodies, emit more of this light than cooler objects like trees or buildings.

RAID

In computing, the acronym RAID (originally redundant array of inexpensive disks, now also known as redundant array of independent disks) refers to a data storage scheme using multiple hard drives to share or replicate data among the drives. Depending on the version chosen, the benefit of RAID is one or more of increased data integrity, fault-tolerance, throughput or capacity compared to single drives. In its original implementations, its key advantage was the ability to combine multiple low-cost devices using older technology into an array that offered greater capacity, reliability, speed, or a combination of these things, than was affordably available in a single device using the newest technology.

At the very simplest level, RAID combines multiple hard drives into a single logical unit. Thus, instead of seeing several different hard drives, the operating system sees only one. RAID is typically used on server computers, and is usually (but not necessarily) implemented with identically sized disk drives. With decreases in hard drive prices and wider availability of RAID options built into motherboard chipsets, RAID is also being found and offered as an option in more advanced personal computers. This is especially true in computers dedicated to storage-intensive tasks, such as video and audio editing.

The original RAID specification suggested a number of prototype 'RAID levels', or combinations of disks. Each had theoretical advantages and disadvantages. Over the years, different implementations of the RAID concept have appeared. Most differ substantially from the original idealized RAID levels, but the numbered names have remained. This can be confusing, since one implementation of RAID 5, for example, can differ substantially from another. RAID 3 and RAID 4 are often confused and even used interchangeably.
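One idea behind the parity-based RAID levels (3, 4 and 5) can be shown in a few lines: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors plus the parity. A minimal sketch, using tiny byte strings in place of real disk blocks:

```python
# XOR parity as used (conceptually) by RAID levels 3/4/5: the parity block
# is the XOR of all data blocks, so losing any one block is recoverable.

def parity(blocks):
    """XOR together equal-length blocks, byte by byte."""
    p = bytes(len(blocks[0]))
    for b in blocks:
        p = bytes(x ^ y for x, y in zip(p, b))
    return p

data = [b"disk0", b"disk1", b"disk2"]   # toy stand-ins for disk blocks
p = parity(data)

# Simulate losing disk 1 and rebuilding it from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt)  # → b'disk1'
```

The rebuild works because XOR is its own inverse: XOR-ing the parity with the surviving blocks cancels them out, leaving the missing block.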


Quantum computer

A quantum computer is a computer that harnesses the power of atoms and molecules to perform memory and processing tasks. It has the potential to perform certain calculations billions of times faster than any silicon-based computer. In both the search for ever smaller and faster computational devices and the search for a computational understanding of biological systems such as the brain, one is naturally led to consider the possibility of computational devices the size of cells, molecules, atoms, or on even smaller scales.

Indeed, it has been pointed out that if trends over the last forty years continue, we may reach atomic-scale computation by the year 2010. This move down in scale takes us from systems that can be understood (to a good enough approximation) using classical mechanics alone, to those which require a quantum mechanical understanding. Thus, it should not be surprising to find that the idea of quantum computation is not new. However, most if not all work so far has been understandably speculative.

QR Code

A QR Code is a matrix code (or two-dimensional bar code) created by the Japanese corporation Denso-Wave in 1994. The name 'QR' is derived from 'Quick Response', as the creator intended the code to allow its contents to be decoded at high speed. QR Codes are most common in Japan, where they are currently the most popular type of two-dimensional code.

Although initially used for tracking parts in vehicle manufacturing, QR Codes are now used for inventory management in a wide variety of industries. More recently, the inclusion of QR Code reading software on camera phones in Japan has led to a wide variety of new, consumer-oriented applications, aimed at relieving the user of the tedious task of entering data into their mobile phone. QR Codes storing addresses and URLs are becoming increasingly common in magazines and advertisements in Japan. The addition of QR Codes on business cards is also becoming common, greatly simplifying the task of entering the personal details of a new acquaintance into the address book of one's mobile phone.

Sun SPOT

Sun SPOT (Sun Small Programmable Object Technology) is a wireless sensor network (WSN) mote developed by Sun Microsystems. The device is built upon the IEEE 802.15.4 standard. Unlike other available mote systems, the Sun SPOT is built on the Java 2 Micro Edition Virtual Machine (JVM).

Hardware
The completely assembled device should be able to fit in the palm of your hand.
Processing

180 MHz 32-bit ARM920T core, 512K RAM, 4M Flash
2.4 GHz IEEE 802.15.4 radio with integrated antenna
USB interface
Sensor Board
2G/6G 3-axis accelerometer
Temperature sensor
Light sensor
8 tri-color LEDs
6 analog inputs
2 momentary switches
5 general purpose I/O pins and 4 high current output pins

Networking
The motes communicate using the IEEE 802.15.4 standard including the base-station approach to sensor networking. This implementation of 802.15.4 is not ZigBee-compliant.
Software
The device's use of Java device drivers is particularly remarkable, as Java is known for its hardware independence. The Sun SPOT uses a small J2ME virtual machine which runs directly on the processor without an operating system.
