Logical Versus Physical Database Modeling

After all business requirements have been gathered for a proposed database, they must be modeled. Models visually represent the proposed database so that business requirements can easily be associated with database objects, ensuring that all requirements have been gathered completely and accurately. Different types of diagrams are typically produced to illustrate the business processes, rules, entities, and organizational units that have been identified. These diagrams often include entity relationship diagrams, process flow diagrams, and server model diagrams. An entity relationship diagram (ERD) represents the entities, or groups of information, maintained for a business and the relationships among them. Process flow diagrams represent business processes and the flow of data between the processes and entities that have been defined. Server model diagrams provide a detailed picture of the database as it is transformed from the business model into a relational database with tables, columns, and constraints. In short, data modeling serves as the link between business needs and system requirements.

Two types of data modeling are as follows:

* Logical modeling
* Physical modeling

If you are going to be working with databases, then it is important to understand the difference between logical and physical modeling, and how they relate to one another. Logical and physical modeling are described in more detail in the following subsections.

Logical Modeling

Logical modeling deals with gathering business requirements and converting those requirements into a model. The logical model revolves around the needs of the business, not the database, although the needs of the business are used to establish the needs of the database. Logical modeling involves gathering information about business processes, business entities (categories of data), and organizational units. After this information is gathered, diagrams and reports are produced, including entity relationship diagrams, business process diagrams, and eventually process flow diagrams. The diagrams produced should show the processes and data that exist, as well as the relationships between business processes and data. Logical modeling should accurately render a visual representation of the activities and data relevant to a particular business.
Logical modeling affects not only the direction of database design, but also indirectly affects the performance and administration of an implemented database. When time is invested performing logical modeling, more options become available for planning the design of the physical database.

The diagrams and documentation generated during logical modeling are used to determine whether the requirements of the business have been completely gathered. Management, developers, and end users alike review these diagrams and documentation to determine whether more work is required before physical modeling commences.

Typical deliverables of logical modeling include

* Entity relationship diagrams
An entity relationship diagram at this stage is also referred to as an analysis ERD. The point of the initial ERD is to provide the development team with a picture of the different categories of data for the business, as well as how these categories of data are related to one another.
* Business process diagrams
The process model illustrates all the parent and child processes performed by individuals within a company. It gives the development team an idea of how data moves within the organization. Because process models illustrate the activities of individuals in the company, they can also be used to determine how a database application interface is designed.
* User feedback documentation

Physical Modeling

Physical modeling involves the actual design of a database according to the requirements established during logical modeling. Logical modeling mainly involves gathering the requirements of the business, with the latter part of logical modeling directed toward the goals and requirements of the database. Physical modeling deals with the conversion of the logical, or business, model into a relational database model. When physical modeling occurs, objects are defined at the schema level. A schema is a group of related objects in a database; a database design effort is normally associated with one schema.

During physical modeling, objects such as tables and columns are created based on entities and attributes that were defined during logical modeling. Constraints are also defined, including primary keys, foreign keys, other unique keys, and check constraints. Views can be created from database tables to summarize data or to simply provide the user with another perspective of certain data. Other objects such as indexes and snapshots can also be defined during physical modeling. Physical modeling is when all the pieces come together to complete the process of defining a database for a business.
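
As a small illustration of these physical-modeling steps, the sketch below turns two invented entities into tables with primary key, foreign key, and check constraints, and adds a view that gives another perspective on the data. The entity and column names are made up for illustration and are not taken from any particular logical model; SQLite merely stands in for the target database software.

```python
import sqlite3

# An in-memory database stands in for the target RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce foreign key constraints

# The (hypothetical) CUSTOMER entity becomes a table with a primary key
# and a check constraint on the status column.
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        status      TEXT NOT NULL CHECK (status IN ('active', 'inactive'))
    )""")

# The (hypothetical) ORDER entity becomes a table whose foreign key
# enforces the customer-places-order relationship from the logical model.
conn.execute("""
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer (customer_id),
        total       REAL NOT NULL CHECK (total >= 0)
    )""")

# A view summarizes the data: total order value per customer.
conn.execute("""
    CREATE VIEW customer_totals AS
    SELECT c.name, SUM(o.total) AS total_spent
    FROM customer c JOIN customer_order o ON o.customer_id = c.customer_id
    GROUP BY c.name""")

conn.execute("INSERT INTO customer VALUES (1, 'Acme', 'active')")
conn.execute("INSERT INTO customer_order VALUES (10, 1, 25.0)")
conn.execute("INSERT INTO customer_order VALUES (11, 1, 75.0)")
rows = conn.execute("SELECT * FROM customer_totals").fetchall()
```

The constraints here are exactly the objects the paragraph above describes: the primary keys identify rows, the foreign key implements a relationship from the logical model, and the check constraints enforce business rules at the database level.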

Physical modeling is database-software specific, meaning that the objects defined during physical modeling can vary depending on the relational database software being used. For example, most relational database systems vary in the way data types are represented and the way data is stored, although basic data types are conceptually the same across implementations. Additionally, some database systems have objects that are not available in others.

Implementation of the Physical Model

The implementation of the physical model is dependent on the hardware and software being used by the company. The hardware can determine what type of software can be used because software is normally developed according to common hardware and operating system platforms. Some database software might only be available for Windows NT systems, whereas other software products such as Oracle are available on a wider range of operating system platforms, such as UNIX. The available hardware is also important during the implementation of the physical model because data is physically distributed onto one or more physical disk drives. Normally, the more physical drives available, the better the performance of the database after the implementation. Some software products now are Java-based and can run on virtually any platform. Typically, the decisions to use particular hardware, operating system platforms, and database software are made in conjunction with one another.

Typical deliverables of physical modeling include the following:

* Server model diagrams
The server model diagram shows tables, columns, and relationships within a database.
* User feedback documentation
* Database design documentation


Understanding the difference between logical and physical modeling will help you build better organized and more effective database systems. This article described both of these models.

Continuous Integration

During the last few months I got the chance to test some automation tools for our build and release process, after I was made the “Automation Evangelist” for my group ( ;o) not that I ever was one). These tools have helped us enormously in automating the build and release effort, and my team has seen a considerable increase in quality.

Continuous Integration is one of the buzzwords on the list of Extreme Programming best practices. “Continuous Integration” means that everyone on the team integrates their changes back into the source repository frequently, verifying that the changes didn’t break anything. These days, most people expect Continuous Integration to be highly automated as well, i.e. having an automatic build system in place to continuously verify that the code in the repository compiles and passes all its tests. That is why we are using CruiseControl.
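
The heart of that automation (run every check after each integration and flag any failure) can be sketched in a few lines. Everything below is an invented stand-in: a real CruiseControl setup drives a build tool such as Ant against the actual repository, but the verification loop it performs looks conceptually like this:

```python
import unittest

# A toy "codebase": the function being integrated.
def add(a, b):
    return a + b

# The automated checks that every integration must pass.
class IntegrationChecks(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

def verify_build():
    """Run the full test suite, as a CI server would after every commit;
    return True only if everything passes."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(IntegrationChecks)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

build_ok = verify_build()
```

A CI server simply runs this kind of verification on a schedule or on every commit and notifies the team the moment `build_ok` goes false.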

In the next few days I will be writing about some of these tools, like Ant and CruiseControl.

content from http://prabhakars.blogspot.com/atom.xml

Extreme Programming Methodology

Extreme programming is a relatively recent methodology that evolved in 1996, when Kent Beck, a software professional, began an engagement with Daimler Chrysler using new concepts in product development. The result was the Extreme Programming (XP) methodology. XP is a popular methodology among product development organizations because it professes simplicity of code, early testing, and frequent review, which enables faster feedback and shorter development cycles. It also emphasizes customer satisfaction.

Features of the XP model

  • It empowers the project team to confidently respond to changing customer requirements even late in the life cycle.
  • Managers, customers, developers and other stakeholders are all part of a team dedicated to delivering a quality software product.
  • The model requires XP practitioners to communicate regularly with their customers thus providing frequent updates on project status and correcting wrong paths.
  • It encourages programmers to keep the design simple and clean.
  • Since testing happens from day one, feedback arrives almost immediately, thus giving ample time to implement changes.

Applicability of the XP model

  • Projects where the teams have identified the risks, quantified them, and assessed them to be significant. Projects with dynamically changing requirements also fit this model.
  • Software product companies, which receive feedback from customers on a continuous basis, have to be prepared to address the inherent risk involved. The XP model scores over the traditional waterfall and iterative development models in such circumstances.
  • The model can be applied to team sizes between 2 and 12, though larger projects of 30 resources have reported success. The model requires an extended development team which, apart from the developers, includes managers and the customer as well.

Rohit Prabhakar


Spring Framework- Digest

As I was going through some articles on the web regarding the “Spring framework”, I put together the following digest of what the Spring framework is and what its benefits are.

Spring is an open source framework created to address the complexity of enterprise application development. One of the chief advantages of the Spring framework is its layered architecture, which allows you to be selective about which of its components you use while also providing a cohesive framework for J2EE application development.

The Spring framework

The Spring framework is a layered architecture consisting of seven well-defined modules. The Spring modules are built on top of the core container, which defines how beans are created, configured, and managed, as shown in Figure 1.

Figure 1. The seven modules of the Spring framework

Each of the modules (or components) that comprise the Spring framework can stand on its own or be implemented jointly with one or more of the others. The functionality of each component is as follows:
The core container: The core container provides the essential functionality of the Spring framework. A primary component of the core container is the BeanFactory, an implementation of the Factory pattern. The BeanFactory applies the Inversion of Control (IOC) pattern to separate an application’s configuration and dependency specification from the actual application code.
Spring context: The Spring context is a configuration file that provides context information to the Spring framework. The Spring context includes enterprise services such as JNDI, EJB, e-mail, internationalization, validation, and scheduling functionality.
Spring AOP: The Spring AOP module integrates aspect-oriented programming functionality directly into the Spring framework, through its configuration management feature. As a result you can easily AOP-enable any object managed by the Spring framework. The Spring AOP module provides transaction management services for objects in any Spring-based application. With Spring AOP you can incorporate declarative transaction management into your applications without relying on EJB components.
Spring DAO: The Spring JDBC DAO abstraction layer offers a meaningful exception hierarchy for managing the exception handling and error messages thrown by different database vendors. The exception hierarchy simplifies error handling and greatly reduces the amount of exception-handling code you need to write (such as around opening and closing connections). Spring DAO’s JDBC-oriented exceptions comply with its generic DAO exception hierarchy.
Spring ORM: The Spring framework plugs into several ORM frameworks to provide its object-relational mapping tool, including JDO, Hibernate, and iBATIS SQL Maps. All of these comply with Spring’s generic transaction and DAO exception hierarchies.
Spring Web module: The Web context module builds on top of the application context module, providing contexts for Web-based applications. As a result, the Spring framework supports integration with Jakarta Struts. The Web module also eases the tasks of handling multi-part requests and binding request parameters to domain objects.
Spring MVC framework: The MVC framework is a full-featured MVC implementation for building Web applications. The MVC framework is highly configurable via strategy interfaces and accommodates numerous view technologies including JSP, Velocity, Tiles, iText, and POI.
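
As a rough illustration of the exception-translation idea behind the Spring DAO module described above, here is a minimal sketch in Python. All class names and the error-code table are invented for illustration; the real framework does this in Java, mapping JDBC `SQLException` details onto its `DataAccessException` hierarchy.

```python
# Vendor-specific errors are translated into one generic hierarchy, so
# calling code never depends on a particular database driver.

class DataAccessException(Exception):
    """Root of the generic hierarchy (hypothetical stand-in for
    Spring's DataAccessException)."""

class DuplicateKeyException(DataAccessException):
    """A specific, vendor-neutral failure category."""

# Invented vendor-specific error codes for a duplicate-key violation.
VENDOR_DUPLICATE_CODES = {"oracle": 1, "mysql": 1062}

def translate(vendor, error_code):
    """Map a vendor error code onto the generic hierarchy."""
    if VENDOR_DUPLICATE_CODES.get(vendor) == error_code:
        return DuplicateKeyException(f"duplicate key ({vendor} code {error_code})")
    return DataAccessException(f"{vendor} error {error_code}")
```

Because callers catch `DuplicateKeyException` rather than inspecting raw vendor codes, the same DAO code works unchanged across database vendors.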
Spring framework functionality can be used in any J2EE server and most of it also is adaptable to non-managed environments. A central focus of Spring is to allow for reusable business and data-access objects that are not tied to specific J2EE services. Such objects can be reused across J2EE environments (Web or EJB), standalone applications, test environments, and so on, without any hassle.
The basic concept of the Inversion of Control pattern (also known as dependency injection) is that you do not create your objects but describe how they should be created. You don’t directly connect your components and services together in code but describe which services are needed by which components in a configuration file. A container (in the case of the Spring framework, the IOC container) is then responsible for hooking it all up.
In a typical IOC scenario, the container creates all the objects, wires them together by setting the necessary properties, and determines when methods will be invoked. The Spring framework uses the Type 2 (setter injection) and Type 3 (constructor injection) implementations for its IOC container.
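
Type 2 and Type 3 refer, in Martin Fowler's terminology, to setter injection and constructor injection. The sketch below illustrates both in plain Python; the class names are invented, and a small hand-rolled function plays the role that Spring's container and configuration file play in Java.

```python
class MailService:
    """A service that other components depend on."""
    def send(self, to, body):
        return f"sent to {to}: {body}"

class OrderProcessor:
    # Type 3 (constructor injection): the dependency arrives via __init__.
    def __init__(self, mail_service):
        self.mail_service = mail_service

    def confirm(self, customer):
        return self.mail_service.send(customer, "order confirmed")

class ReportGenerator:
    # Type 2 (setter injection): the dependency is supplied after
    # construction through a setter method.
    def __init__(self):
        self.mail_service = None

    def set_mail_service(self, mail_service):
        self.mail_service = mail_service

def container():
    """Stand-in for the IOC container: it, not the components themselves,
    decides how objects are created and wired together."""
    mail = MailService()
    processor = OrderProcessor(mail)   # constructor injection
    reporter = ReportGenerator()
    reporter.set_mail_service(mail)    # setter injection
    return processor, reporter

processor, reporter = container()
```

Note that neither `OrderProcessor` nor `ReportGenerator` creates its own `MailService`: each merely declares what it needs, and the container supplies it, which is the inversion the pattern's name describes.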

Aspect-oriented programming

Aspect-oriented programming, or AOP, is a programming technique that allows programmers to modularize crosscutting concerns, or behavior that cuts across the typical divisions of responsibility, such as logging and transaction management. The core construct of AOP is the aspect, which encapsulates behaviors affecting multiple classes into reusable modules.
AOP and IOC are complementary technologies in that both apply a modular approach to complex problems in enterprise application development. In a typical object-oriented development approach you might implement logging functionality by putting logger statements in all your methods and Java classes. In an AOP approach you would instead modularize the logging services and apply them declaratively to the components that required logging. The advantage, of course, is that the Java class doesn’t need to know about the existence of the logging service or concern itself with any related code. As a result, application code written using Spring AOP is loosely coupled.
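
The logging example above can be sketched with a Python decorator standing in for an aspect. Everything here is illustrative: Spring AOP applies such advice declaratively through configuration rather than with decorators, but the effect is the same, in that the business method contains no logging code.

```python
calls = []  # stands in for a real log sink

def logged(fn):
    """A minimal 'aspect': logging advice that can wrap any function."""
    def wrapper(*args, **kwargs):
        calls.append(f"entering {fn.__name__}")
        result = fn(*args, **kwargs)
        calls.append(f"leaving {fn.__name__}")
        return result
    return wrapper

@logged
def transfer(amount):
    # Pure business logic: it knows nothing about the logging service.
    return amount * 2

value = transfer(21)
```

The logging concern lives in one module (`logged`) and is applied declaratively at the point of use, so removing or changing the logging policy never touches the business code.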
AOP functionality is fully integrated into the Spring context for transaction management, logging, and various other features.

Benefits of using Spring MVC

1. Spring provides a very clean division between controllers, JavaBean models, and views.
2. Spring’s MVC is very flexible. Unlike Struts, which forces your Action and Form objects into concrete inheritance (thus taking away your single shot at concrete inheritance in Java), Spring MVC is entirely based on interfaces. Furthermore, just about every part of the Spring MVC framework is configurable by plugging in your own interface. Of course, convenience classes are also provided as an implementation option.
3. Spring, like WebWork, provides interceptors as well as controllers, making it easy to factor out behavior common to the handling of many requests.
4. Spring MVC is truly view-agnostic. You don’t get pushed to use JSP if you don’t want to; you can use Velocity, XSLT, or other view technologies. If you want to use a custom view mechanism, for example your own templating language, you can easily implement the Spring View interface to integrate it.
5. Spring Controllers are configured via IoC like any other objects. This makes them easy to test, and beautifully integrated with other objects managed by Spring.
6. Spring MVC web tiers are typically easier to test than Struts web tiers, due to the avoidance of forced concrete inheritance and explicit dependence of controllers on the dispatcher servlet.
7. The web tier becomes a thin layer on top of a business object layer. This encourages good practice. Struts and other dedicated web frameworks leave you on your own in implementing your business objects; Spring provides an integrated framework for all tiers of your application.
Spring vs. Struts? For the complete article, read ()
Spring doesn’t compete against Struts, at least not directly. You can use Struts for your front end and have Spring hold your model. However, Spring can stand on Struts’ shoulders: it has implemented the front end, and in the process has solved some of Struts’ thorny bits.
My biggest complaint with Struts is its tight coupling with JSPs, and Spring allows support for JSP, Velocity and FreeMarker right out of the box. Now I can create controllers and even form validating beans without the need for complicated, difficult-to-maintain JSP code.
I realize the benefits listed above are good reasons to go with Spring MVC over Struts, but how popular Spring MVC is, and how much it is actually being used out there, remains to be seen.
One article that says no to Spring and is in favor of Struts can be found at
Please write back with your comments and queries :o)

Rohit Prabhakar

Rohit Prabhakar

Rohit is a Digital Marketing Transformation Sherpa, Chief Marketing Technologist.

Rohit has extensive experience in Technology with a strong passion for Marketing & Sales; he lives at the crossroads of Tech, Marketing & Sales. As Rohit puts it, he thinks like a Marketer, plans like a Techie, and executes like a Business (Sales) leader, because he has a unique combination of over 15 years of experience in Technology, Sales, and Marketing.

Currently Rohit is leading the digital marketing efforts and playing the role of Chief Marketing Technologist for a world leader in Healthcare. In his previous roles he has been recognized for leadership in Digital Marketing & Web Strategy, Marketing Technologies & Marketing Automation, Product Management & Marketing, Sales, and Delivery. With his unique skills, Rohit is very strong at leading business in Marketing Technologies, Business Technologies, Enterprise Technologies, Digital Strategy, Product Marketing, Business Development, and more. He has an impressive track record in Product Management, Client Relationships, and Portfolio, Program & Project Management for both startups and Fortune 500 organizations.

Rohit has directed cross-functional teams and organizations and has owned P&L for all his engagements. He has an international background and is known for expertise in the offshore delivery model and outsourcing. He also has strong communication, collaboration, and presentation skills at all levels, including CXO.


Rohit is Head of Digital Marketing Strategy and Chief Marketing Technologist for McKesson

Rohit’s responsibilities include:

  • Responsible for setting the strategic direction for mckesson.com and other digital properties. This includes creating the strategic direction and socializing it both internally and with the business units. This role is also responsible for ensuring that our websites meet the highest usability standards and continue to provide tools and services that are easy for our members to use.
  • Chief Marketing Technologist: Lead the global marketing technology landscape and provides thought leadership, vision and strategy to enable McKesson marketing objectives. Collaborate on Marketing Automation with all the McKesson Business Units.
  • Drive both enterprise and tactical marketing technologies & capabilities across web, mobile, social, CRM, eCommerce & Analytics to enable McK’s brands get ahead of the curve and deliver the most optimized and relevant consumer experience.
  • Manage all digital agency relationships.
  • The role is also responsible for researching, understanding, and communicating the needs of both our customers and business units throughout the entire organization, influencing project and portfolio decisions for this channel, and leading the company’s web strategy and execution.
  • Chief Marketing Technologist: Responsible for implementing mckesson.com technology roadmap and best of breed digital marketing technologies, and leveraging digital channels to dynamically target customers.


Read more at www.linkedin.com/in/rohitprabhakar/