The Loop Emulation Service for the Public Switched Telephone Network

Loop Emulation Service is a protocol that supports the delivery of POTS over an emulated loop. It conveys analog supervisory signaling states and is commonly used by interworking functions (IWFs) to support Common Channel Signalling (CCS). To avoid signaling analog line states directly, the protocol uses a message set similar to that of V5-PSTN (ETSI EN 300 324-1), thus covering the risk posed by the variations seen in individual national implementations of analog POTS.

FLUORESCENT MULTILAYER DISC (FMD)

Today the need for digital storage capacity is increasing at a rate of roughly 60% per annum. There is a strong requirement for more storage in facilities such as storage area networks, data warehouses, supercomputers and e-commerce-related data mining, as the volume of data to be processed is ever rising. With the arrival of high-bandwidth Internet and data-intensive applications such as high-definition TV (HDTV) and video and music on demand, even smaller devices such as personal VCRs, PDAs and mobile phones will require multi-gigabyte and terabyte capacity in the next couple of years.

This ever-increasing capacity demand can only be managed by a steady increase in the areal density of magnetic and optical recording media. In future, this density increase is feasible only by taking advantage of shorter-wavelength lasers, higher lens numerical aperture (NA) or near-field techniques. This increase is best achieved with optical memory technologies.

Fluorescent multilayer disc (FMD) is a three-dimensional storage medium that can store a large volume of data and is also capable of increasing the capacity of a given volume, with the aim of achieving a cubic storage element having the dimensions of the writing or reading laser wavelength. The current wavelength of 650 nm should be sufficient to store up to a terabyte of data.
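As a rough, illustrative check on that claim (all geometry assumed: a 120 mm disc with a 15 mm centre hole, 1 mm of recordable depth, one bit per wavelength-sized cube, and no coding overhead), the ideal volumetric capacity does come out in the terabyte range:

public class FmdCapacityEstimate {
    public static void main(String[] args) {
        double wavelength = 650e-9;    // reading/writing laser wavelength (m)
        double outerRadius = 0.060;    // assumed 120 mm disc
        double innerRadius = 0.0075;   // assumed 15 mm centre hole
        double depth = 0.001;          // assumed 1 mm of recordable layers
        double volume = Math.PI
                * (outerRadius * outerRadius - innerRadius * innerRadius)
                * depth;
        double cellVolume = Math.pow(wavelength, 3); // one cubic storage element
        double bytes = volume / cellVolume / 8.0;
        System.out.printf("Ideal capacity: about %.1f TB%n", bytes / 1e12); // ~5 TB
    }
}

Real capacities are far lower once layer spacing, spot size and error-correction overhead are accounted for, but the cubic scaling is what makes the terabyte claim plausible.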

DIGITAL HUBBUB

UNDER THE HOOD

This topic brings out the elements that make up the digital hubbub. Just like a personal computer, it has a hardware side and a software side.

HARDWARE
At the core of a home entertainment hub are:
Central Processing Unit
Digital signal processing chips
Hard disk drive
Universal serial bus port
PCMCIA connector
Ethernet jack
All these components are shown in the figure.

· Central processing unit :- As in a computer system, the CPU is the master of the hub. It deals with the data transfers that take place between the different peripherals and the hub, checks on the parallel operations taking place in the hub, and routes data packets to the different operating units. It receives signals indicating the function to be performed from the control panel or from a remote control; typical functions include recording a video, writing an MP3 to a CD or retrieving stored data. Based on the received signal, the central processing unit generates the signals that direct the other peripherals to perform the requested operation.

· Digital signal processing chips :- The analog signals from peripherals such as a TV set or a tape recorder are received by analog-to-digital converters. The digitized data is accessed by the digital signal processing chips via their serial ports. These data streams are compressed for storage; for display, the stored data is expanded by the same digital signal processing chips. The processor has parallel operating functional units, which help in real-time processing of the data.

· Hard Disk Drive :- The hard disk drive is under the direct control of the CPU via a disk controller. As in any device, the hard disk drive is used to store the data. The compressed data from the digital signal processing chips is written onto the hard disk drive, and for display the same data is accessed via the CPU. It requires a capacity of several gigabytes, even more than 20 GB, because good video requires about a gigabyte per hour and good audio about a megabyte per minute (see the sketch after this list).

· Universal Serial Bus (USB) :- The USB is a synchronous protocol that supports isochronous and asynchronous data and messaging transfers. The universal serial bus port is used to exchange data with portable MP3 music players, digital cameras and similar devices.

· Personal Computer Memory Card International Association (PCMCIA) :- PCMCIA cards are credit-card-size adapters that fit into the PCMCIA slots found in most handheld and laptop computers. In order to fit into these small slots, PCMCIA cards must meet very strict physical requirements. The slot is used for transferring data with non-volatile memory cards or other devices.

· Ethernet Jack :- The hub requires communication with other personal computers, as in a local area network. The Ethernet jack is the hardware used for this interface.
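As a quick check on the capacity figures in the hard-disk item above, here is an illustrative calculation using the quoted rates of roughly a gigabyte per hour for good video and a megabyte per minute for good audio:

public class HubStorageBudget {
    public static void main(String[] args) {
        double diskGB = 20.0;           // the 20 GB figure quoted above
        double videoGBPerHour = 1.0;    // ~1 GB per hour of good video
        double audioMBPerMinute = 1.0;  // ~1 MB per minute of good audio
        System.out.printf("Video: about %.0f hours%n", diskGB / videoGBPerHour);
        System.out.printf("Audio: about %.0f hours%n",
                diskGB * 1024.0 / audioMBPerMinute / 60.0);
    }
}

At those rates a 20 GB drive holds roughly 20 hours of video or some 340 hours of audio, which is why several-gigabyte capacities are the practical minimum for a hub.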










A MAGNIFYING LOOK ON

· DSP PROCESSOR
· USB
· PCMCIA


A436 PARALLEL VIDEO DSP CHIP
Features
· Overview
o Highly optimized and efficient, general purpose, very high performance, 512b advanced imaging parallel DSP and 32b RISC processor (no MMU) in a single chip in a single instruction stream
o Performance of 50,000 RISC MIPS for motion estimation and 3 billion MACS with only 100 MHz CPU clock
o Achieves very high performance with moderate CPU clock rate and main-stream fabrication
o Fully C software-programmable, parallel image processor optimized for real-time image/video processing/compression and RTOS
o Much faster, more efficient and easier to understand, optimize and use than other fast DSPs
o Directly software programmable in C as universal compressor (encoder)/decompressor (decoder) for multi-format, standards-based or proprietary image/video compression / streaming / decompression
o Provides fully software programmable video compression in real-time
o Provides powerful, flexible, fully software programmable, autonomous "smart" cameras that "watch" images themselves so people don't have to, providing "scene content analysis"
o Enhanced version (fourth generation Ax36 core) of proven A236 Video DSP Chip
o Scaled-down and reduced pin-count versions can be built on demand to satisfy less demanding applications requiring even-lower price points
· Easy to program - use C not microcode!
o Full software development environment includes C compiler, assembler, linker, loader, simulator and debugger
o Develop code using our parallel-enhanced ANSI-standard C compiler with assembly language output
o Simple parallel programming model supports parallel operations on structures using embedded parallel data types
o Use C for what it was intended for -- to provide tight control of the code generated
o You are in control - no need to embed assembly language in C programs or rely on precompiled subroutine libraries
o Evaluation board and evaluation "smart" camera board integrated with software development environment
o Supports modular development, licensing and protection of code by third-parties
o Open architecture with simple pipeline so efficient code can be written easily
o Simple but powerful 32b instruction set provides quick interrupt response time
o Tools automatically pack two scalar instructions into one 32b word when no parallel processing is needed, further reducing program size
o Three internal DMA controllers automatically build circular, multi-frame image/video buffers with programmable sizes in memory, providing a standardized format for video capture, processing and display
o uCLinux RTOS with TCP/IP and UDP/IP for Internet connectivity, and file system and device drivers (video input/output, IDE, USB, ethernet, PCMCIA)
· Very high useful performance (figures @ 100 MHz CPU clock)
o 32, 8b x 16b, or 16b x 16b, multiply-adds per CPU clock with 32b accumulation (3.2B MACS; this MAC operation is sketched in plain code after this list)
o 32 histogram, table look-up or zero-detection operations per CPU clock
o 32 x 16b, or 64 x 8b, plus one 32b, ALU operations per CPU clock
o 32 x 32b bit-realignment and 32 x 16b storage operations per CPU clock
o 64-point motion-estimation / pattern matching operation per CPU clock (50,000 RISC MIPS)
o Eight 4-point matrix-vector multiplies or convolutions per CPU clock (3.2B MACS)
o Four pairs of convolutions per CPU clock for implementing wavelets (3.2B MACS)
o 66 or 100 MHz CPU clock, and 100 or 133 MHz SDRAM clock
· Enhanced ports for imaging, multimedia and device control
o Three independent, asynchronously clocked, glueless, video-aware and packet capable, double-buffered parallel DMA ports, each data path is programmable as 8- or 16-bits wide, has video sync signals and supports live digitized video input or output, including all video buffering required, and also enable multiple A436's to work together for even higher performance
o Able to receive images from up to four image sensors, and produce a video output from them, simultaneously
o Able to provide high speed refresh to video output devices including video encoder chips, and field sequential color LCD microdisplays
o Bit-Programmable I/O port has 8 bit-programmable I/O pins for interface to switches, actuators, keypad and other devices; interrupts are fully programmable

o Buffered, full-duplex, Stereo Audio DMA Port handles 4-, 8- and 16-bit samples in little-endian and big-endian formats
o Buffered, full-duplex, programmable 2-wire/3-wire, Serial DMA Port controls/accesses external low speed and high speed devices including image sensors, video encoders and decoders, serial EEPROMs and flash memory cards
o Supports two video encoder/decoder chips and two audio CODECs simultaneously
o Two UART ports are fully programmable
o Two programmable interrupt timers
o Real time clock and CPU clock counter
o Internal debug device provides a built-in logic analyzer with trace capability for program debugging
o 32 MB address space; memory configuration is dynamically specified at boot-up
o Memory interface provides clock to SDRAM
o 5v-tolerant I/O and 3.3v SDRAM interface
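To make the multiply-accumulate (MAC) figures above concrete, here is a plain sequential Java sketch of the work the A436's parallel units perform in a single CPU clock: 32 16-bit by 16-bit multiplies accumulated into a 32-bit sum. This illustrates the arithmetic only, not the chip's actual instruction set or parallel hardware.

public class MacSketch {
    // One 'CPU clock' of A436 MAC work, done sequentially here: the chip
    // performs all 32 of these 16b x 16b multiply-adds at once.
    static int multiplyAccumulate(short[] a, short[] b) {
        int acc = 0;                    // 32-bit accumulator
        for (int i = 0; i < 32; i++) {  // arrays assumed to hold 32 samples
            acc += a[i] * b[i];         // 16b x 16b product, widened to 32b
        }
        return acc;
    }
}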
USB BUS
The motivation behind the selection of USB for the Macintosh architecture is simple.
USB is a low-cost, high-speed peripheral expansion architecture that provides data transfer rates up to 12 Mbps.
The USB is a synchronous protocol that supports isochronous and asynchronous data and messaging transfers.
USB provides considerably faster data throughput for devices than does the Apple Desktop Bus (ADB) and the Macintosh modem and printer ports. This makes USB an excellent replacement solution for not only the existing slower RS-422 serial channels in the Macintosh today, but also the Apple Desktop Bus, and in some cases slower speed SCSI devices.
In addition to the obvious performance advantages, USB devices are hot pluggable and as such provide a true plug and play experience for computer users. USB devices can be plugged into and unplugged from the USB anytime without having to restart the system. The appropriate USB device drivers are dynamically loaded and unloaded as necessary by the Macintosh USB system software components to support hot plugging.

Better Device Expansion Model
The USB specification includes support for up to 127 simultaneously available devices on a single computer system. (One device is taken by the root hub.) To connect and use USB devices, it isn't necessary to open up the system and add expansion cards. Device expansion is accomplished with the addition of external USB multiport hubs. Hubs can also be embedded in USB devices like keyboards and monitors, which provides device expansion in much the same way that the Apple Desktop Bus (ADB) is extended for the addition of a mouse through the keyboard or monitor. However, the USB implementation won't have the device expansion or speed limitations that ADB does.

Compact Connectors and Cables
USB devices utilize a compact 4-pin connector rather than the larger 8- to 25-pin connectors typically found on RS-232 and RS-422 serial devices. This results in smaller cables with less bulk. The compact USB connector provides two pins for power and two for data I/O. Power on the cable relieves hardware manufacturers of low-power USB devices from having to develop both a peripheral device and an external power supply, thereby reducing the cost of USB peripheral devices for manufacturers and consumers.



PCMCIA
Founded in 1990, the Personal Computer Memory Card International Association (PCMCIA), of which Quatech is a member, developed a set of standards by which additional memory could be added to portable systems. It soon became apparent that this same interface could be used to add I/O devices and hard disk drives as well, thereby dramatically increasing functionality of laptop computers.
Physical Characteristics
The PCMCIA specification 2.0 release in 1991 added protocols for I/O devices and hard disks. The 2.1 release in 1993 refined these specifications, and is the standard around which PCMCIA cards are built today.
PCMCIA cards are credit-card-size adapters which fit into the PCMCIA slots found in most handheld and laptop computers. In order to fit into these small slots, PCMCIA cards must meet very strict physical requirements, as shown in Figure 6 below. There are three types of PCMCIA cards: Type I, generally used for memory cards such as FLASH and STATIC RAM; Type II, used for I/O peripherals such as serial adapters, parallel adapters, and fax-modems (this is the type of card Quatech manufactures); and Type III, used for rotating media such as hard disks. The only difference in the physical specification for these cards is thickness.




SOFTWARE
As in a normal personal computer, software looks after the user interface, applications and so on. The software of the digital hubbub can be considered as a series of layers. Innermost is an operating system that manages resources such as storage and CPU time. The next layer is the middleware, which handles such housekeeping details as displaying text and graphics on TV screens. The middleware interprets the input from the front panel or remote control and enables the CPU to generate signals appropriate to the requested function. It also deals with communication over the cable that supplies the digital video and data streams. The outermost layer handles the applications. These applications include recording controls, program guides, onscreen signup for additional services, games and even web browsers. One application provides a search engine that, with only a few button pushes, could find every movie musical starring, for instance, Elvis Presley, or action dramas with Jackie Chan, or new episodes of your favourite home-improvement show. Software for the digital hubbub is provided by Mediabolic Inc. (San Francisco).
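The layering can be pictured as nested interfaces. The sketch below is purely illustrative: Mediabolic's actual APIs are not described here, so every name in it is hypothetical.

// Hypothetical sketch of the hub's three software layers; all names invented.
interface HubOperatingSystem {            // innermost layer
    void scheduleTask(Runnable task);     // manages CPU timing
    byte[] readStorage(String path);      // manages storage
}

interface HubMiddleware {                 // middle layer
    void drawOnTvScreen(String text);     // housekeeping: text/graphics on TV
    int readRemoteControl();              // interprets panel/remote input
    void receiveCableData(byte[] packet); // talks to the cable feed
}

interface HubApplication {                // outermost layer
    void run(HubMiddleware middleware);   // guide, recorder, browser, games...
}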

(Figure: the hubbub's software as concentric layers, with the operating system innermost, the middleware in the middle, and the applications in the outer layer.)

NEOLOGISM IN DIGITAL HUBBUB
As far as the electronic component market is concerned, the success of a product relies on its compactness and low cost. Like any electronic device, the digital hubbub is also required to be compact and cheap. Compactness is brought about by implementing chips with multiple functions; in other words, it can be achieved by merging the functions of two or three chips into one. Therefore several electronics firms are doing a lot of research and development to bring about a more compact and cheaper digital hubbub. A few efforts to make the digital hubbub compact and cheap are illustrated below.
In April, Conexant Systems Inc. (Newport Beach, Calif., formerly Rockwell Semiconductor Systems) announced a chip that combines digital TV reception with a cable modem. It lets cable operators sell broadband interactive services in a low-cost package that includes 100-plus TV channels.
In another effort, Cirrus Logic Inc. (Austin, Texas) has among its chips a combined DVD and digital-video chipset that powers Samsung's PVR. And on the computercentric side, there's Linksys Group Inc. (Irvine, Calif.). Best known for its pocket routers (units that connect small home or office networks to the Internet), it has a new chip that combines routing circuitry with a cable modem and a wireless network access point. Such a chip could be built into a stand-alone digital hub or slotted into a PC acting as a home server.
Still, how do engineers cram what used to be thousands of dollars worth of video and computer equipment into an under-$500 box? By designing chips to do multiple duty, points out Anthony Simon, director of marketing for chip maker Conexant. For example, adding cable-modem functions to a video chip cuts between $20 and $40 from the cost of a set-top box.
And at PVR maker TiVo Inc. (Alviso, Calif.), product marketing director Ted Malone is proud of the subtle economies the company engineered into its custom disk-controller chip. The chip can read data streams from the disk surface in whatever order is most efficient for the head and then reassemble the information before handing it off to the video section.
Meanwhile, the price of hard-disk drives has put enormous volumes of storage within reach of even a run-of-the-mill set-top box. Currently, a 40-GB drive, which stores more than 50 hours of video, sells for about $80 retail and much less wholesale. Even a small fraction of that disk space can store dozens of hours of audio and thousands of digital photos.
The evolving interface

A first step is an interface like TiVo’s, where you peck out the name of the show on a virtual on-screen keyboard: as you type each letter, an adjacent display of potential matching titles gets shorter until only a few choices remain. Once the right show is found, recording its episodes is a matter of pressing just a button or two.
Moxi is simplifying matters further by mapping the letters most likely to be typed next to the numbers 1 through 9 on the remote’s keypad. Typing text on a numeric keypad will be familiar to the millions of people who send text messages by cellular phone.
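A minimal sketch of how such keypad search can work (illustrative only: the letter groups below follow the familiar phone layout rather than Moxi's actual likelihood-based assignment):

import java.util.*;

public class KeypadSearch {
    // Phone-style letter groups for digits 2-9 (an assumed mapping; the
    // article only says letters are mapped onto the keys 1 through 9).
    private static final String[] KEYS = {
        "", "", "abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"
    };

    // Convert a title to the digit sequence a viewer would type for it.
    static String toDigits(String title) {
        StringBuilder sb = new StringBuilder();
        for (char c : title.toLowerCase().toCharArray()) {
            for (int d = 2; d <= 9; d++) {
                if (KEYS[d].indexOf(c) >= 0) { sb.append(d); break; }
            }
        }
        return sb.toString();
    }

    // As on TiVo's screen: each digit typed narrows the list of titles.
    static List<String> matches(List<String> titles, String typedDigits) {
        List<String> out = new ArrayList<>();
        for (String t : titles) {
            if (toDigits(t).startsWith(typedDigits)) out.add(t);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> guide = Arrays.asList("Jailhouse Rock", "Rush Hour", "Jaws");
        // Typing "ja" presses 5 then 2, leaving two candidate titles:
        System.out.println(matches(guide, toDigits("ja"))); // [Jailhouse Rock, Jaws]
    }
}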

Dream versus reality
Any build-up to a single home gateway that controls your television, air conditioning, and e-mail will not come overnight, according to Jakob Nielsen. People won't replace their VCR, DVD player, and home network all at once, he points out.
Thus far, barring a few exceptions such as "universal" remote controls and serial control inputs for some cable boxes, manufacturers still seem focused on locking consumers into a single supplier. Whether that philosophy can stand up to the ultimate purpose of a digital hub—connecting all the disparate entertainment devices a consumer may own and even replacing some of them—is probably the crucial question for the evolution of this new technology.









REFERENCES

1. www.spectrum.ieee.org (Digital Hubbub)
2. www.oxfordmicrodevices.com/A436_summary.html (A436 features)
3. IEEE Spectrum magazine, July 2002

Aspect Oriented Programming

Introduction

“A project's efficiency increases if all the concerns are well modularized”
- Law of Demeter

Object-oriented programming (OOP) has been presented as a technology that can fundamentally aid software engineering, because the underlying object model provides a better fit with real domain problems. However, most software systems consist of several concerns that crosscut multiple modules. Object-oriented techniques for implementing such concerns result in systems that are invasive to implement, tough to understand, and difficult to evolve. This forces the implementation of those design decisions to be scattered throughout the code, resulting in “tangled” code that is excessively difficult to develop and maintain. The new aspect-oriented programming (AOP) methodology facilitates modularization of crosscutting concerns. Using AOP, you can create implementations that are easier to design, understand, and maintain. Further, AOP promises higher productivity, improved quality, and better ability to implement newer features.








Evolution of the software process

Software design processes and programming languages exist in a mutually supporting relationship. Design processes break a system down into smaller and smaller units. Programming languages provide mechanisms that allow the programmer to define abstractions of system sub-units, and then compose those abstractions in different ways to produce the overall system. A design process and a programming language work well together when the programming language provides abstraction and composition mechanisms that cleanly support the kinds of units the design process breaks the system into.
In the early days of computer science, developers wrote programs by means of direct machine-level coding. Unfortunately, programmers spent more time thinking about a particular machine's instruction set than the problem at hand. Slowly, we migrated to higher-level languages that allowed some abstraction of the underlying machine. Then came structured languages; we could now decompose our problems in terms of the procedures necessary to perform our tasks. However, as complexity grew, we needed better techniques. Object-oriented programming (OOP) let us view a system as a set of collaborating objects. Classes allow us to hide implementation details beneath interfaces. Polymorphism provided a common behavior and interface for related concepts, and allowed more specialized components to change a particular behavior without needing access to the implementation of base concepts.

Programming methodologies and languages define the way we communicate with machines. Each new methodology presents new ways to decompose problems: machine code, machine-independent code, procedures, classes, and so on. Each new methodology allowed a more natural mapping of system requirements to programming constructs. Evolution of these programming methodologies let us create systems with ever increasing complexity. The converse of this fact may be equally true: we allowed the existence of ever more complex systems because these techniques permitted us to deal with that complexity.
Currently, OOP serves as the methodology of choice for most new software development projects. Indeed, OOP has shown its strength when it comes to modeling common behavior. However, as we will see shortly, and as you may have already experienced, OOP does not adequately address behaviors that span over many -- often unrelated -- modules. In contrast, AOP methodology fills this void. AOP quite possibly represents the next big step in the evolution of programming methodologies.









What is a concern?

A concern is a particular goal, concept, or area of interest. In technology terms, a typical software system comprises several core and system-level concerns. For example, a credit card processing system's core concern would process payments, while its system-level concerns would handle logging, transaction integrity, authentication, security, performance, and so on. Many such concerns -- known as crosscutting concerns -- tend to affect multiple implementation modules. Using current programming methodologies, crosscutting concerns span over multiple modules, resulting in systems that are harder to design, understand, implement, and evolve.

Aspect-oriented programming (AOP) separates concerns better than previous methodologies, thereby providing modularization of crosscutting concerns.








Crosscutting concern problems
Although crosscutting concerns span over many modules, current implementation techniques tend to implement these requirements using one-dimensional methodologies, forcing implementation mapping for the requirements along a single dimension. That single dimension tends to be the core module-level implementation. The remaining requirements are tagged along this dominant dimension. In other words, the requirement space is an n-dimensional space, whereas the implementation space is one-dimensional. Such a mismatch results in an awkward requirements-to-implementation map.
Symptoms
A few symptoms can indicate a problematic implementation of crosscutting concerns using current methodologies.
Code tangling: Modules in a software system may simultaneously interact with several requirements. For example, oftentimes developers simultaneously think about business logic, performance, synchronization, logging, and security. Such a multitude of requirements results in the simultaneous presence of elements from each concern's implementation, resulting in code tangling.
Code scattering: Since crosscutting concerns, by definition, spread over many modules, related implementations also spread over all those modules. For example, in a system using a database, performance concerns may affect all the modules accessing the database.
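A small hypothetical fragment shows both symptoms at once: the business method below tangles logging and performance timing with its core logic, and the same boilerplate would be repeated, scattered, across every similar module. All names are invented for illustration; the Logger interface matches the one used later in the weaving example.

interface Logger { void log(String message); }

class AccountModule {
    private final Logger logger;                  // logging concern
    AccountModule(Logger logger) { this.logger = logger; }

    void transfer(String from, String to, long amount) {
        logger.log("Entering transfer");          // tangled: logging
        long start = System.currentTimeMillis();  // tangled: performance timing
        // ... core business logic: move 'amount' from 'from' to 'to' ...
        logger.log("Leaving transfer");           // tangled: logging
        logger.log("transfer took "
                + (System.currentTimeMillis() - start) + " ms");
    }
}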
Implications
Combined, code tangling and code scattering affect software design and developments in many ways:
Poor traceability: Simultaneously implementing several concerns obscures the correspondence between a concern and its implementation, resulting in a poor mapping between the two.
Lower productivity: Simultaneous implementation of multiple concerns shifts the developer's focus from the main concern to the peripheral concerns, leading to lower productivity.
Less code reuse: Since, under these circumstances, a module implements multiple concerns, other systems requiring similar functionality may not be able to readily use the module, further lowering productivity.
Poor code quality: Code tangling produces code with hidden problems. Moreover, by targeting too many concerns at once, one or more of those concerns will not receive enough attention.
More difficult evolution: A limited view and constrained resources often produce a design that addresses only current concerns. Addressing future requirements often requires reworking the implementation. Since the implementation is not modularized, that means touching many modules. Modifying each subsystem for such changes can lead to inconsistencies. It also requires considerable testing effort to ensure that such implementation changes have not caused any problems.

The current response
Since most systems include crosscutting concerns, it's no surprise that a few techniques have emerged to modularize their implementation. Such techniques include mix-in classes, design patterns, and domain-specific solutions.
With mix-in classes, for example, you can defer a concern's final implementation. The primary class contains a mix-in class instance and allows the system's other parts to set that instance. For example, in a credit card processing example, the class implementing business logic composes a logger mix-in. Another part of the system could set this logger to get the appropriate logging type. For example, the logger could be set to log using a filesystem or messaging middleware. Although the nature of logging is now deferred, the composer nevertheless contains code to invoke logging operations at all log points and controls the logging information.
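A sketch of that arrangement, reusing the document's CreditCard, Money and Logger placeholder types: the logging implementation is deferred to whatever Logger gets set, but the composing class still owns every log call site.

// Mix-in style: the logging implementation is deferred, yet the composer
// still controls when and what to log.
public class CreditCardProcessorWithMixin {
    private Logger logger;                   // the composed mix-in instance

    public void setLogger(Logger logger) {   // another part of the system picks
        this.logger = logger;                // filesystem, messaging, etc.
    }

    public void credit(CreditCard card, Money amount) {
        logger.log("Starting credit: " + card + " " + amount);
        // Crediting logic
        logger.log("Completing credit: " + card + " " + amount);
    }
}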
Behavioral design patterns, like Visitor and Template Method, let you defer implementation. However, just as in the case of mix-ins, control of the operation -- invoking the visiting logic or the template methods -- stays with the main classes.
Domain-specific solutions, such as frameworks and application servers, let developers address some crosscutting concerns in a modularized way. The Enterprise JavaBeans (EJB) architecture, for example, addresses crosscutting concerns such as security, administration, performance, and container-managed persistence. Bean developers focus on the business logic, while the deployment developers focus on deployment issues, such as bean-data mapping to a database. The bean developer remains, for the most part, oblivious to the storage issues. In this case, you implement the crosscutting concern of persistence using an XML-based mapping descriptor.
The domain-specific solution offers a specialized mechanism for solving the specific problem. As a downside to domain-specific solutions, developers must learn new techniques for each such solution. Further, because these solutions are domain specific, the crosscutting concerns not directly addressed require an ad hoc response.
The architect's dilemma
Good system architecture considers present and future requirements to avoid a patchy-looking implementation. Therein lies a problem, however. Predicting the future is a difficult task. If you miss future crosscutting requirements, you'll need to change, or possibly reimplement, many parts of the system. On the other hand, focusing too much on low-probability requirements can lead to an overdesigned, confusing, bloated system. Thus a dilemma for system architects: How much design is too much? Should I lean towards underdesign or overdesign?
In summary, the architect seldom knows every possible concern the system may need to address. Even for requirements known beforehand, the specifics needed to create an implementation may not be fully available. Architecture thus faces the under/overdesign dilemma.


The fundamentals of AOP
The scenario so far suggests that it can be helpful to modularize the implementation of crosscutting concerns. Researchers have studied various ways to accomplish that task under the more general topic of "separation of concerns." AOP represents one such method. AOP strives to cleanly separate concerns to overcome the problems discussed above.
AOP, at its core, lets you implement individual concerns in a loosely coupled fashion, and combine these implementations to form the final system. Indeed, AOP creates systems using loosely coupled, modularized implementations of crosscutting concerns. OOP, in contrast, creates systems using loosely coupled, modularized implementations of common concerns. The modularization unit in AOP is called an aspect, just as a common concern's implementation in OOP is called a class.

View the system as a set of concerns

We can view a complex software system as a combined implementation of multiple concerns. A typical system may consist of several kinds of concerns, including business logic, performance, data persistence, logging and debugging, authentication, security, multithread safety, error checking, and so on. You'll also encounter development-process concerns, such as comprehensibility, maintainability, traceability, and evolution ease. Figure 1 illustrates a system as a set of concerns implemented by various modules.
Figure 1. Implementation modules as a set of concerns


Figure 2 presents a set of requirements as a light beam passing through a prism. We pass a requirements light beam through a concern-identifier prism, which separates each concern. The same view also extends towards development-process concerns.
Figure 2. Concern decomposition: The prism analogy

Crosscutting concerns in a system
A developer creates a system as a response to multiple requirements. We can broadly classify these requirements as core module-level requirements and system-level requirements. Many system-level requirements tend to be orthogonal (mutually independent) to each other and to the module-level requirements. System-level requirements also tend to crosscut many core modules. For example, a typical enterprise application comprises crosscutting concerns such as authentication, logging, resource pooling, administration, performance, and storage management. Each crosscuts several subsystems. For example, a storage-management concern affects every stateful business object.



Development steps in AOP

1. Aspectual decomposition: Decompose the requirements to identify crosscutting and common concerns. Separate module-level concerns from crosscutting system-level concerns.
2. Concern implementation: Implement each concern separately. For the credit card processing example, you'd implement the core credit card processing unit, logging unit, and authentication unit.
3. Aspectual recomposition: In this step, an aspect integrator specifies recomposition rules by creating modularization units -- aspects. The recomposition process, also known as weaving or integrating, uses this information to compose the final system. For the credit card processing example, you'd specify, in a language provided by the AOP implementation, that each operation's start and completion be logged. You would also specify that each operation must clear authentication before it proceeds with the business logic.
AOP development stages
AOP differs most from OOP in the way it addresses crosscutting concerns. With AOP, each concern's implementation remains unaware that other concerns are "aspecting" it. For example, the credit card processing module doesn't know that the other concerns are logging or authenticating its operations. That represents a powerful paradigm shift from OOP.
Weaving example
The weaver, a processor, assembles the individual concerns in a process known as weaving. The weaver, in other words, interlaces different execution-logic fragments according to criteria supplied to it.
To illustrate code weaving, let's consider our credit card processing system example. For brevity, consider only two operations: credit and debit. Also assume that a suitable logger is available.
Consider the following credit card processing module:
public class CreditCardProcessor {
    public void debit(CreditCard card, Money amount)
        throws InvalidCardException, NotEnoughAmountException,
               CardExpiredException {
        // Debiting logic
    }

    public void credit(CreditCard card, Money amount)
        throws InvalidCardException {
        // Crediting logic
    }
}
Also, consider the following logging interface:
public interface Logger {
    public void log(String message);
}
The desired composition requires the following weaving rules, expressed here in natural language (a programming language version of these weaving rules is provided later):
1. Log each public operation's beginning
2. Log each public operation's completion
3. Log any exception thrown by each public operation
The weaver would then use these weaving rules and concern implementations to produce the equivalent of the following composed code:
public class CreditCardProcessorWithLogging {
    Logger _logger;

    public void debit(CreditCard card, Money amount)
        throws InvalidCardException, NotEnoughAmountException,
               CardExpiredException {
        _logger.log("Starting CreditCardProcessor.debit(CreditCard, Money) "
                + "Card: " + card + " Amount: " + amount);
        // Debiting logic
        _logger.log("Completing CreditCardProcessor.debit(CreditCard, Money) "
                + "Card: " + card + " Amount: " + amount);
    }

    public void credit(CreditCard card, Money amount)
        throws InvalidCardException {
        _logger.log("Starting CreditCardProcessor.credit(CreditCard, Money) "
                + "Card: " + card + " Amount: " + amount);
        // Crediting logic
        _logger.log("Completing CreditCardProcessor.credit(CreditCard, Money) "
                + "Card: " + card + " Amount: " + amount);
    }
}





Anatomy of AOP languages
Just like any other programming methodology implementation, an AOP implementation consists of two parts: a language specification and an implementation. The language specification describes language constructs and syntax. The language implementation verifies the code's correctness according to the language specification and converts it into a form that the target machine can execute.
The AOP language specification

At a higher level, an AOP language specifies two components:
Implementation of concerns: Mapping an individual requirement into code so that a compiler can translate it into executable code. Since the implementation of concerns takes the form of specifying procedures, you can use traditional languages like C, C++, or Java with AOP.
Weaving rules specification: How to compose independently implemented concerns to form the final system. For this purpose, an implementation needs to use or create a language for specifying rules for composing different implementation pieces to form the final system. The language for specifying weaving rules could be an extension of the implementation language, or something entirely different.

AOP language implementation
AOP language compilers perform two logical steps:
1. Combine the individual concerns
2. Convert the resulting information into executable code
An AOP implementation can implement the weaver in various ways, including source-to-source translation. Here, you preprocess source code for individual aspects to produce weaved source code. The AOP compiler then feeds this converted code to the base language compiler to produce final executable code. For instance, using this approach, a Java-based AOP implementation would convert individual aspects first into Java source code, then let the Java compiler convert it into byte code. The same approach can perform weaving at the byte code level; after all, byte code is still a kind of source code. Moreover, the underlying execution system -- a VM implementation, say -- could be aspect aware. Using this approach for Java-based AOP implementation, for example, the VM would load weaving rules first, then apply those rules to subsequently loaded classes. In other words, it could perform just-in-time aspect weaving.




AspectJ: An AOP implementation for Java
AspectJ, a freely available AOP implementation for Java from Xerox PARC, is a general-purpose aspect-oriented Java extension. AspectJ uses Java as the language for implementing individual concerns, and it specifies extensions to Java for weaving rules. These rules are specified in terms of pointcuts, join points, advice, and aspects. Join points define specific points in a program's execution, a pointcut is the language construct that specifies join points, advice defines pieces of an aspect implementation to be executed at pointcuts, and an aspect combines these primitives.
In addition, AspectJ also allows the "aspecting" of other aspects and classes in several ways. Indeed, we can introduce new data members and new methods, as well as declare a class to implement additional base classes and interfaces.
AspectJ's weaver -- an aspect compiler -- combines different aspects together. Because the final system created by the AspectJ compiler is pure Java byte code, it can run on any conforming JVM. AspectJ also features tools such as a debugger and selected IDE integration.
Below is an AspectJ implementation of the logging aspect, implementing the weaving rules described in natural language previously:
public aspect LogCreditCardProcessorOperations {
    Logger logger = new StdoutLogger();

    pointcut publicOperation():
        execution(public * CreditCardProcessor.*(..));

    pointcut publicOperationCardAmountArgs(CreditCard card, Money amount):
        publicOperation() && args(card, amount);

    before(CreditCard card, Money amount):
            publicOperationCardAmountArgs(card, amount) {
        logOperation("Starting",
                thisJoinPoint.getSignature().toString(), card, amount);
    }

    after(CreditCard card, Money amount) returning:
            publicOperationCardAmountArgs(card, amount) {
        logOperation("Completing",
                thisJoinPoint.getSignature().toString(), card, amount);
    }

    after(CreditCard card, Money amount) throwing (Exception e):
            publicOperationCardAmountArgs(card, amount) {
        logOperation("Exception " + e,
                thisJoinPoint.getSignature().toString(), card, amount);
    }

    private void logOperation(String status, String operation,
                              CreditCard card, Money amount) {
        logger.log(status + " " + operation
                + " Card: " + card + " Amount: " + amount);
    }
}
AOP benefits
AOP helps overcome the aforementioned problems caused by code tangling and code scattering. Here are other specific benefits AOP offers:
Modularized implementation of crosscutting concerns: AOP addresses each concern separately with minimal coupling, resulting in modularized implementations even in the presence of crosscutting concerns. Such an implementation produces a system with less duplicated code. Since each concern's implementation is separate, it also helps reduce code clutter. Further, modularized implementation also results in a system that is easier to understand and maintain.
Easier-to-evolve systems: Since the aspected modules can be unaware of crosscutting concerns, it's easy to add newer functionality by creating new aspects. Further, when you add new modules to a system, the existing aspects crosscut them, helping create a coherent evolution.
Late binding of design decisions: Recall the architect's under/overdesign dilemma. With AOP, an architect can delay making design decisions for future requirements, since she can implement those as separate aspects.
More code reuse: Because AOP implements each aspect as a separate module, each individual module is more loosely coupled. For example, a module interacting with a database can be reused with a different logging requirement simply by composing it with a different logger aspect.
In general, a loosely coupled implementation represents the key to higher code reuse. AOP enables more loosely coupled implementations than OOP.

AOP statistics
1. On reengineering a particular image processing program the following results were obtained


2. In the original FreeBSD v3.3 code, the implementation of prefetching is both scattered and tangled: the code is spread out over approximately 260 lines in 10 clusters in 5 core functions from two subsystems. Using AOP, the resulting code was confined to two pages of 50 lines each.




AOP applications
a-kernel is an effort to apply AOP techniques to operating system kernel code.
FACET is a framework for a customizable real-time event channel that uses aspects.
Lasagne is an aspect-oriented architecture for context-sensitive and run-time weaving of aspects; prototypes have been developed in Java and Correlate.
QuO is a middleware framework and toolkit which includes aspect-oriented languages for developing quality-of-service-enabled, adaptive distributed object applications.
SoC+MAS is a research project whose main focus is the application of advanced separation-of-concerns techniques to handle the complexity of designing and implementing multi-agent OO software.
SADES is a customizable and extensible object database evolution system implemented using AOP techniques.

Chameleon chip (civil)

Chameleon Systems Inc. of San Jose, California, is one of a new breed of reconfigurable processor makers. It claims that the Chameleon chip, its first Reconfigurable Communications Processor (RCP), provides a design environment that allows customers to convert their algorithms to hardware configurations on the fly.

Advantages

1. Early and fast designs
2. Enabling Field upgrades
3. Creating product differentiation for suppliers
4. Creating flexible & adaptive products
5. Reducing power
6. Reducing manufacturing costs
7. Increasing bandwidths

Disadvantages

1. Inertia – Engineers slow to change
2. RCP designs require a comprehensive set of tools
3. 'Learning curve' for designers unfamiliar with reconfigurable logic

Applications

1. Wireless Base stations
2. Packetized voice(VOIP)
3. Digital Subscriber Line(DSL)
4. Software Defined Radio(SDR)


Quantum Dots

Quantum Dot fabrication techniques for QD diode lasers employ self-organized growth of uniform nanometer-scale islands of InGaAs on the surface of GaAs or InP. Under the proper choice of deposition conditions, a layer of material with a lattice constant different from that of the substrate may spontaneously transform to an array of three-dimensional islands. The size of these islands provides quantization in all three directions making them Quantum Dots.

Diode lasers based on InAs/GaAs Quantum Dots developed by Innolume and Zia Lasers (acquired by Innolume in December 2006) are uniquely positioned to serve silicon photonics as light sources, bringing optical interconnect technologies to mainstream computer applications.


  • The three-dimensional nature of the quantization of electrons and holes in these islands provides significantly improved temperature stability of the laser's basic output characteristics. Fully temperature-independent operation has been demonstrated, eliminating the need for expensive temperature-control schemes.

  • QD lasers demonstrate extremely low intensity noise, making them an ideal CW light source for high-speed external modulation.

  • Owing to the inhomogeneous broadening caused by the size distribution of the QDs, the effect of spectral hole burning can be used in a controlled way to form a very broad lasing spectrum (>80 nm) with uniform intensity distribution. These broad-band or “white” lasers can be used in WDM silicon photonics systems.

  • Another effect of the broad gain spectrum of QD lasers is a very stable mode-locking regime with high peak power, previously unachievable. A mode-locked laser is a source of clocking for future silicon chips, with high clock frequencies enabled by optical clock technology.

  • The emission range of InGaAs/GaAs QD lasers, 1.064 – 1.31 micrometer, fits well within the window of transparency of silicon-based waveguides.


QD lasers arm Silicon Photonics to deliver cost efficient solutions for future Optical Interconnect and Optical Clock Systems.

Innolume has also developed a wide portfolio of lasers for the specific wavelength range between 1.064 and 1.31 micron for medical, sensing and other applications.



Silicon Photonics

As computing and networking performance continue on their exponential growth track, defined by Moore’s Law, the exponentially increasing communication needs will soon exceed the limits of copper wiring. Communications links, or interconnects, are the biggest bottleneck in networks and computers. For example, the next generation of Ethernet runs at 10 Gb/s, and at this speed electrical signals in copper wires can only travel a small distance before fading out completely.

Optical fiber on the other hand is the ideal medium for communications over most distances. The fiber itself is very cheap, and light travels through it for miles even when launched with tiny amounts of power. Optical fiber also has the capability to carry data at rates up to one thousand times faster than 10Gb/s. At each end of the fiber, an optical transmitter/receiver (“transceiver”) is required to interface to the computer or switch. Unfortunately, these optical transceivers currently are extremely expensive. The typical cost of data communications today runs about $100/Gb/s. As a result, optical fiber communication has been largely confined to the capital-intensive long distance telecommunications infrastructure.

Fortunately, Silicon Photonics technology shows promise of delivering low-cost, seamless optical connectivity from distances of hundreds of meters at the network level all the way down to millimeters for inter- and intra-chip communication. The cost of Silicon Photonics is expected to reach well under $1/Gb/s, many times cheaper than typical data communication links.
Within 10 years, the established approach of using electricity in copper wiring just won’t work, and the ideal approach of using light in optical fiber is just simply too expensive. Only low cost disruptive technology can tip the balance from copper wiring to fiber optics to allow the computing and networking performance to continue on an exponential growth path. Silicon Photonics can fulfill this role.

Since silicon is not an efficient electrically pumped laser material, most silicon photonic solutions need a steady source, or Continuous Wave (CW), of laser light to power the interconnection. This source can be a typical laser based on III-V substrates such as GaAs and InP. The data transfer from electrical to optical occurs in a modulator, in which a voltage applied to a silicon photonic modulator will change the amount of light transmitted. Similarly, data on a light stream is converted back into an electrical current in a silicon photonic detector. Electronic drivers and receivers on each end of the path help with the signal quality. Finally, for increased total data rate and lower cost, it’s best to have many communication channels combined or wavelength division multiplexed (WDM), onto one fiber or waveguide. These modulator, detector and WDM elements can be integrated together on one Si photonic chip for best performance and lowest cost.

The cost of most silicon photonic devices can be relatively low, like that of silicon electronics. Therefore, the majority of the cost of silicon photonic interconnects will be in the source lasers that must meet tough specifications. These lasers will need to emit high power with low noise at wavelengths that are transparent in Silicon, above 1.1 micrometers. Also, for increased total bandwidth and cost efficiency, a preferred solution would send multiple data channels on multiple wavelengths on one fiber, called wavelength division multiplexing, or WDM. The laser also must operate in a very harsh environment, perhaps from below 0 C to over 100 C.

Innolume’s lasers, based on quantum dots, are uniquely qualified to address these needs for silicon photonics.

Quantum cryptography becomes a reality [civil]

According to reliable sources at NEC, commercial quantum cryptography, a revolutionary system that can produce quantum keys at a speed of 100 Kbit/s and then distribute them up to 40 kilometres along commercial fibre optic lines, will be available on the market by the second half of 2005. According to Kazuo Nakamura, senior manager of NEC's quantum information technology group at the company's Fundamental and Environmental Research Laboratories, this can be considered a world record, as it is a rare blend of speed and distance. As put by Akio Tajima, assistant manager at the laboratory, the system has gone through several improvements since it was successfully tested in April at the company’s laboratories in Tokyo.

The system permits users to exchange keys with assurance that they have not been tampered with during transmission. The whole system works by encoding the encryption key on individual photons: because a photon's state cannot be read or copied without disturbing it, any interception by an eavesdropper is detectable at the receiving end. Akio Tajima said that until last April, the 'round-trip' quantum cryptography method at NEC, which had a laser as well as a receiver at one end and a mirror at the other, faced problems in sustaining high speed over long distances.
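The article does not detail NEC's exact protocol, but the canonical quantum key distribution scheme, BB84, illustrates the principle of carrying key bits on photons. The following classical simulation sketches only its bit-and-basis bookkeeping; the security itself comes from the physics of measuring photons, not from any code.

import java.util.Random;

// Classical simulation of BB84's sifting step: sender and receiver each pick
// random bases, and only bits measured in matching bases join the key.
public class Bb84Sketch {
    public static void main(String[] args) {
        Random rnd = new Random();
        StringBuilder siftedKey = new StringBuilder();
        for (int i = 0; i < 16; i++) {
            int bit = rnd.nextInt(2);         // Alice's key bit, put on a photon
            int aliceBasis = rnd.nextInt(2);  // polarization basis used to encode
            int bobBasis = rnd.nextInt(2);    // basis Bob happens to measure in
            if (aliceBasis == bobBasis) {
                siftedKey.append(bit);        // matching bases: bit is kept
            }
            // Mismatched bases are discarded. An eavesdropper measuring in the
            // wrong basis disturbs the photon and shows up as an elevated error
            // rate when Alice and Bob later compare a sample of their bits.
        }
        System.out.println("Sifted key: " + siftedKey);
    }
}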

Earlier, the detector that turns photons into electrons when they collide with it functioned very slowly. This created a problem in registering the photons, as there is an avalanche of electrons with every collision. The team led by Tajima has now rectified that disadvantage by developing a new detector that can work reliably at 100 Kbit/s. This fast pace helps clear the whole bunch of electrons produced by a collision from the device quickly, so that it can register the next photon. The NEC scientists have also rectified a problem with the mirror used earlier in the system, called a Faraday mirror. The performance of this mirror, which reflects light with its polarization rotated 90 degrees from that of the input light, changes with temperature, leading to quality loss. NEC has now improved on this mirror, producing one that works efficiently across temperature variations.

Another advantage of the NEC system is its conventional laser, which can transmit photons through fibre optic cables over a long distance with very little noise. Although there were more powerful lasers that could propagate photons over long distances, they all produced more noise, leading to efficiency loss. According to Nakamura, 'This is the world's fastest key generation technology at 40 kilometres'. He backs the claim with comparisons: the University of Geneva has achieved quantum transmission over a distance of more than 60 kilometres, but at a much lower speed, while a system developed by Japan's National Institute of Advanced Industrial Science and Technology, a major government laboratory, has achieved nearly the same speed as NEC's system, but only at about half the distance.

According to Toshiyuki Kanoh, chief manager of the company's System Platforms Research Laboratories, this breakthrough system, invented in collaboration with the Japan Science and Technology Agency's Exploratory Research for Advanced Technology programme and Japan's National Institute of Information and Communications Technology, will take a year to be launched on the commercial market, as its software is still in development. He also added that they are going to create a commercial market for the system, which it lacks now, and expect the police, banks and financial institutions to be its clients by mid-2005. There is also a move to demonstrate the system at various exhibitions and seminars.



ZFS [civil]

Today's file systems, which system administrators observe to be perpetually on the verge of data corruption and, moreover, find extremely difficult to manage because of their slow rate of execution, have enabled ZFS to emerge as one of the most powerful file systems. Used in Sun's Solaris 10 Operating System (Solaris OS), ZFS gains its edge over other file systems through its unique features:

  • Cutting administrative difficulties by 80 percent by automating and combining complicated storage administration concepts.
  • Ensuring the integrity and safety of all data with 64-bit checksums that can detect and correct silent data corruption (a toy sketch of the checksum idea follows this list).
  • Offering more scalability, with 16 billion times the storage of 32- or 64-bit systems.
  • Achieving high performance gains through a transactional object model that removes most of the traditional constraints on the order of issuing I/Os.
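A toy sketch of the checksum-on-read idea (illustrative only: ZFS actually stores checksums in parent block pointers, uses stronger algorithms than the CRC32 stand-in here, and can self-heal from redundant copies):

import java.util.zip.CRC32;

// Detect silent corruption by comparing a stored checksum against one
// recomputed from the data at read time.
public class ChecksummedBlock {
    private final byte[] data;
    private final long storedChecksum;

    ChecksummedBlock(byte[] data) {
        this.data = data.clone();
        this.storedChecksum = checksum(data);  // computed at write time
    }

    static long checksum(byte[] bytes) {
        CRC32 crc = new CRC32();               // stand-in for ZFS's algorithms
        crc.update(bytes);
        return crc.getValue();
    }

    byte[] read() {
        if (checksum(data) != storedChecksum) {
            // ZFS would attempt repair from a mirror or RAID-Z reconstruction.
            throw new IllegalStateException("silent corruption detected");
        }
        return data.clone();
    }
}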


Smart materials and Smart structures [civil]

A new generation of materials called smart materials is changing the way a structural system is going to be designed, built and monitored. Advances in composite materials, instrumentation and sensing technology (fiber-optic sensors) in combination with a new generation of actuator systems based on Piezoelectric ceramics and shape Memory Alloys have made this possible.


Shape memory alloys have found applications in a variety of high-performance products, ranging from aircraft hydraulic couplings and electrical connectors to surgical suture anchors. Since the material can generate high actuation forces in response to temperature changes, shape memory alloys have the potential to serve as an alternative to solenoids. The material has special significance in the area of smart structures because it offers significant advantages over conventional actuation technologies in a number of ways:


  • Large amounts of recoverable strain offer very high work densities. This is very attractive in situations where the work-output-to-weight ratio is critical.
  • Direct actuation without moving parts, thereby increasing efficiency.
  • Large available strains permit long strokes with constant force output.
  • The actuation can be linear or rotary.

