Khanderao on Service Component Architecture
The challenge is to find a way to program the many cores simultaneously.
Current desktop machines have up to four separate cores, while the Cell processor inside the PlayStation 3 has eight (seven of them useable). Each core is effectively a programmable chip in its own right.
But to take advantage of the extra processing power, programmers need to give each core instructions that work in parallel with one another.
There are already specialist chips with multiple cores – such as those used in router hardware and graphics cards – but Dr Mark Bull, at the Edinburgh Parallel Computing Centre, said multi-core chips were forcing a sea-change in the programming of desktop applications.
“It’s not too difficult to find two or four independent things you can do concurrently, finding 80 or more things is more difficult, especially for desktop applications.
“It is going to require quite a revolution in software programming.
“Massive parallelism has been the preserve of the minority – a few people doing high-performance scientific computing.”
What is interesting about this is that not only are most application developers not prepared for concurrent programming, they are not really aware of the issue and there is almost no discussion in industry forums on the subject.
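To make the concurrency issue concrete, here is a minimal Java sketch of the kind of change involved: instead of one sequential loop, the work is split into one chunk per available core and the chunks run in parallel on a thread pool. The class and method names are illustrative, not from any of the articles quoted above.

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class ParallelSum {
    // Split the range [0, n) into one chunk per core and sum the chunks
    // concurrently; the final result matches a plain sequential loop.
    public static long sum(long n) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        long chunk = (n + cores - 1) / cores;
        List<Future<Long>> parts = LongStream.range(0, cores)
            .mapToObj(i -> pool.submit(() ->
                LongStream.range(i * chunk, Math.min(n, (i + 1) * chunk)).sum()))
            .collect(Collectors.toList());
        long total = 0;
        for (Future<Long> part : parts) {
            total += part.get(); // blocks until that chunk is done
        }
        pool.shutdown();
        return total;
    }
}
```

Finding two or four such independent chunks is easy, as Dr Bull says; the hard part is restructuring a typical desktop application so that 80 or more exist at once.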
The BETA program is expected to last for several weeks, with FireStorm/DAO 3.2 planned for general availability in April 2008.
FireStorm/DAO is a database access tool that adopts a pragmatic approach of generating Java source code for data persistence that is a direct mapping of a particular relational database schema. It is also possible to define complex multi-table queries and to leverage existing database logic contained within stored procedures.
FireStorm/DAO is based on the Data Access Object design pattern and is available in Enterprise, Architect, and OEM editions. FireStorm/DAO Architect Edition allows new custom code generation templates to be developed and integrated with the FireStorm/DAO Studio environment. FireStorm/DAO Architect Edition includes the source code for the Java code generation templates. The code generation templates are written in Java, which means that Java developers have a very short learning curve before they can start customizing the code generation.
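The release doesn't show what FireStorm/DAO's generated code looks like, but the Data Access Object pattern it is based on can be sketched in a few lines. All names below are hypothetical, and an in-memory map stands in for the JDBC calls a generated DAO would actually issue.

```java
import java.util.*;

// A generic sketch of the Data Access Object pattern: callers depend only on
// the DAO interface, and the implementation hides the storage details.
class Customer {
    final int id;
    final String name;
    Customer(int id, String name) { this.id = id; this.name = name; }
}

interface CustomerDao {
    void insert(Customer c);
    Optional<Customer> findById(int id);
    List<Customer> findAll();
}

// In-memory stand-in; a generated DAO would run SQL against one table.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<Integer, Customer> table = new LinkedHashMap<>();
    public void insert(Customer c) { table.put(c.id, c); }
    public Optional<Customer> findById(int id) {
        return Optional.ofNullable(table.get(id));
    }
    public List<Customer> findAll() { return new ArrayList<>(table.values()); }
}
```

The point of generating this code from the schema, rather than writing it by hand, is that the interface stays a direct mapping of the table while the repetitive implementation is produced automatically.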
Additional information on FireStorm/DAO is available here:
FireStorm/DAO is available for download here:
CodeFutures is the leading supplier of database performance tools. CodeFutures’ database access tool, the award-winning FireStorm/DAO, makes Java software developers more productive by generating Java DAO (Data Access Object) code for accessing relational databases. The benefits provided by CodeFutures’ database access tools are higher developer productivity, better software quality, and lower maintenance costs. CodeFutures’ products are used in hundreds of companies such as Turner Broadcasting, Lehman Brothers, JP Morgan, Wells Fargo Bank, Walt Disney, Kraft Foods, T-Systems, FedEx, Bed Bath and Beyond, Lockheed Martin, Suzuki, EMC, Macromedia, and Siemens.
This means that open source projects can benefit from the Network Effect. The more diverse projects and the more code, the more useful the repository becomes.
David Chappell speculates on why this aspect of SCA (and SDO) receives so little attention.
The reason for this might be political, as promoting a replacement for key parts of Java EE 5 is bound to be contentious. It might also stem from people’s natural enthusiasm for new technology, such as SCA’s assembly mechanism, over a simplification of things that are already available.
It is important not to forget the complexity introduced by handling data in such a heterogeneous network of services. A technology called Service Data Objects (SDO) addresses this problem. SDO offers a format-neutral API that provides a uniform way to access data, regardless of how it is physically stored. By using SDO, the solution developer will not pollute a business application with code to handle diverse choices of data access, such as JDBC Result Sets, JCA records, DOM, JAXB, and EJB entities.
SDO supports a disconnected style of data access and records a summary of any changes made to data objects. Because SDO maintains this change summary, data transfers can include only the portion of the data that has changed, which helps in environments where bandwidth is constrained. The change summary can also be used to resolve data access conflicts and concurrency issues.
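The change-summary idea can be illustrated with a toy data object in plain Java. This is not the actual `commonj.sdo` API — just a sketch of the mechanism: once logging begins, the object remembers which properties were touched, so only that delta needs to travel back to the server.

```java
import java.util.*;

// Toy illustration of SDO's change-summary idea (not the real SDO API):
// a disconnected data object records which properties changed, so a caller
// can transfer or reconcile only the delta.
class TrackedDataObject {
    private final Map<String, Object> values = new HashMap<>();
    private final Map<String, Object> changes = new LinkedHashMap<>();
    private boolean logging = false;

    void beginLogging() { logging = true; changes.clear(); }

    void set(String property, Object value) {
        if (logging && !changes.containsKey(property)) {
            changes.put(property, values.get(property)); // remember old value
        }
        values.put(property, value);
    }

    Object get(String property) { return values.get(property); }

    // The "change summary": only properties touched since beginLogging().
    Map<String, Object> changedProperties() { return changes; }
}
```

Keeping the old values in the summary is what makes conflict detection possible: on reconnect, the server can compare them against its current state before applying the new ones.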
SDO supplies a powerful yet simple programming model for data with first class support for XML and the ability to automatically persist data via the use of a Data Access Service (DAS). A DAS allows the data to be stored or retrieved from a relational database or another repository, and helps to link the SDO models to enterprise data storage.
The second article, called What is SCA?, has a good one-line explanation of Service Component Architecture:
Service Component Architecture provides a concise and flexible model for describing and developing SOA applications and addresses the strategic requirements demanded by agile IT environments. The SCA programming model focuses on describing components and the way that they’re assembled together. It’s inclusive of existing technologies with a primary goal of operating well as an addition to existing heterogeneous environments.
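That focus on "describing components and the way they're assembled together" can be sketched in plain Java. The real SCA programming model uses annotations and composite descriptors; the names below are hypothetical, and a static `assemble()` method stands in for the composite that would wire the components declaratively.

```java
// Conceptual sketch of SCA-style assembly (not the real SCA API): each
// component exposes a service interface, and its references to other
// services are satisfied at assembly time, not inside its own code.
interface QuoteService { double quote(String symbol); }

class FixedQuoteComponent implements QuoteService {
    public double quote(String symbol) { return 42.0; } // stand-in back end
}

class OrderComponent {
    private final QuoteService quotes; // a reference, wired by the assembler
    OrderComponent(QuoteService quotes) { this.quotes = quotes; }
    double costOf(String symbol, int shares) {
        return quotes.quote(symbol) * shares;
    }
}

class Assembly {
    // The "composite": wiring components together outside their own code.
    static OrderComponent assemble() {
        return new OrderComponent(new FixedQuoteComponent());
    }
}
```

Because `OrderComponent` never names its collaborator, swapping the quote back end for an existing remote service is an assembly change, not a code change — which is the sense in which SCA aims to operate well as an addition to heterogeneous environments.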
It’s interesting that the initial ‘how to’ and ‘what is’ articles are now starting to appear more regularly in technical journals. This is probably because the specifications are moving from the planning phase to real-world implementation.
If the composites are doing most of the processing, and it’s really a center-tier process abstracting remote services, then it makes sense to colocate the data as close to the data processing as possible. This is done for manageability, reliability, and performance.
This makes sense for complex composite applications (note that there’s no mention of Service Component Architecture); however, the article falls apart when it continues and uses locking database tables as the argument.
Integrity will also become less of an issue when leveraging this type of center-tier persistence. No need to lock a dozen or so tables when you can simply lock one.