Monday, December 21, 2009

Living with the Enemy

Just imagine two people who hate each other to death being forced to live in the same house, inflicting pain on each other in every possible way. Do you get the picture? That is what is happening between Oracle and SAP. To Larry Ellison, there is no enemy worse than SAP, and for SAP (even though they downplay it) the number one on the enemy list is Oracle.

Yet however much SAP hates Oracle, it cannot stop being a major sales channel for Oracle’s database. Oracle is still the number one database in the SAP customer base. It’s also no secret that many SAP customers would love to stop paying a premium price for a database that is functionally underused by the SAP product line.
But now there is hope for SAP, a new reason to get rid of the traditional relationship between its software and database leader Oracle. If SAP can develop its applications to be less dependent on disk-based databases, that would be the first step in reducing the dependency.
Speed is Money
That was what Mr. Plattner proclaimed in his keynote address at the Sapphire conference. He said the new world of in-memory computing is the next big thing in enterprise software. An in-memory database system (IMDS) is designed explicitly for real-time applications and for embedded systems such as set-top boxes, telecom equipment, consumer electronics and other connected gear. An in-memory database minimizes RAM and CPU demands and offers unmatched performance, reliability and development flexibility.
Enterprise software companies could learn from gaming software development, where in-memory databases are already making a big impact in getting the maximum out of multi-core CPUs. Mr. Plattner did not promise that SAP is building the in-memory concept into an SAP product, but he made it very clear that it is the way forward.
Oracle Killer
The desire to kill Oracle is not newfound at SAP. As early as 2005, Shai Agassi, then president of the product technology group and a member of the SAP executive board, elaborated on the company’s programs to improve the in-memory capability of its software. In-memory capability is a new way to manage data. The largest databases in the world are data warehouses, and these databases receive the most complicated queries, which they need to process as fast as possible. This requires an enormous amount of CPU power. The testing ground for SAP’s new database strategy can be found in a product the company released a few years back, the BI Accelerator or BIA. Among its many attributes, BIA is a hardware appliance that runs SAP Analytics incredibly quickly. The potential “Oracle killer” part of BIA comes from its in-memory database functionality, which processes database queries in RAM with no need to access disk-based storage (or even to have a relational database at all), which means no costly database license, no massive data-storage farm, no expensive DBAs, and so on.
The idea of in-memory query management at SAP is hardly new. Back in the late 1990s, SAP unveiled LiveCache, an in-memory processor for what was then called the Business Warehouse. LiveCache was a little ahead of its time for lots of reasons, starting with the fact that CPU and memory costs were still relatively high for what SAP had in mind. In the end, LiveCache failed to live up to expectations. But it still survived as the in-memory query engine for SAP’s Advanced Planner and Optimizer (APO) supply-chain product.
LiveCache made history in SAP benchmarking, giving an indication of the response times that are possible using an in-memory database engine. Based on SAP’s own benchmarking standard — the SAP Standard Applications Benchmark — SAP’s hardware partners have had a glorious time leapfrogging each other in recent years to see which could achieve the best response times with LiveCache.
So it is more a question of when the killer will arrive. Someday soon, we will have to choose between the status quo and a radical new database approach. What will you choose if the newer approach is cheaper, faster and just as effective?
Read more about IMDS

Monday, December 14, 2009

Service Oriented Computing Platform for Shared Services Model


It is the need of the hour for business entities to revisit their growth plans and to perform consistent checks on their performance and operational utilization. One of the driving factors for the transformation of Application Service Providers (http://blogs.hexaware.com/pitstop/application-service-provider-vs-software-as-a-service-asp-vs-saas.html) into Software as a Service is the change in information technology trends.

Human-computer interaction and interoperable applications seem to be the cloud where future software investments will go. Let’s look into the evolving service-oriented computing platform, which answers many present-day business needs.

Product-based companies were quick to incorporate service orientation into their product designs, mostly by following agile software development methodologies, whereas service-based companies were quick to build up shared resource pools and evolve shared services support.

Initially, the shared services model was spotted within organizations as specialized groups of system- and vendor-specific administrators, and it later evolved into ESSA (http://www.hexaware.com/shared-services.htm). The proposed service-oriented computing platform for such a shared services model would further enhance customer satisfaction, since service design principles are followed.

Going by the tag line “If resources are not retainable, let’s retain their knowledge within the organization”, the service-oriented computing platform goes one step beyond traditional vertical solutions (CRM, TAM, etc.) by providing an SOA-based solution that utilizes present-day horizontal solutions (Livelink ECM). This open-technology solution is vendor independent and can accommodate existing applications from Oracle (PeopleSoft, Siebel), Microsoft (SharePoint), SAP, Kronos Workforce Timekeeper and others by establishing a metadata-driven knowledge solution for the shared services resource pool to work on.

A few key advantages of this service-oriented solution for the shared services model are:

a. Reusability – redundant support (technical/functional) effort is minimized.

b. Service Governance

c. Discoverability

d. Less time spent performing a support activity

e. Knowledge Service Composition – enabling the maintenance of organization-wide service data irrespective of its source department or technology.

Like the PeopleSoft Person Model, which was built on SOA, the service-oriented solution design for the shared services model would make a huge difference for the corporate cube warriors. Let’s wait for more such innovative solutions to surface soon.

Monday, September 14, 2009

Merge Rows as Columns / Transpose records


Requirement: Converting rows to columns


Source data:

Customer | Product | Cost
Cust1 | P1 | 10
Cust1 | P2 | 20
Cust1 | P3 | 30
Cust2 | ABC | 10
Cust2 | P2 | 25
Cust2 | Def | 10

Expected output:

Customer | Product1 | Cost1 | Product2 | Cost2 | Product3 | Cost3
Cust1 | P1 | 10 | P2 | 20 | P3 | 30
Cust2 | ABC | 10 | P2 | 25 | Def | 10

The above illustration would help in understanding the requirement. We had to merge multiple records into one record based on certain criteria. The design had to be reusable since each dimension within the data mart required this flattening logic.

1. Approach:
An Aggregator transformation would group the records by a key, but retrieving the values of a particular column as individual columns is a challenge. Hence we designed a component called ‘Flattener’, based on an Expression transformation.
Flattener is a reusable component, a mapplet that performs the function of flattening records.
The Flattener consists of an Expression and a Filter transformation. The Expression is used to club incoming records together based on certain logic, and the decision to write a record to the target is taken in the Filter transformation.

2. Design:
The mapplet can receive up to five inputs, of the following data types (the mapping to the example columns is shown alongside):
i_Col1 (string) – Customer
i_Col2 (string) – Product
i_Col3 (decimal) – Cost
i_Col4 (decimal)
i_Col5 (date/time)
We have kept the names generic and the data types varied, so that the mapplet can be used in any scenario where records need to be flattened.
The mapplet gives out 15 sets of 5 output ports each, in the following manner:
o_F1_1 (string) – Customer
o_F2_1 (string) – Product1
o_F3_1 (decimal) – Cost1
o_F4_1 (decimal)
o_F5_1 (date/time)
o_F1_2 (string) – Customer
o_F2_2 (string) – Product2
o_F3_2 (decimal) – Cost2
o_F4_2 (decimal)
o_F5_2 (date/time)
… and so on
The output record therefore has repeating sets of 5 columns each (each set corresponds to one incoming row). Based on the requirement, the number of occurrences of these sets can be increased. Only the required fields need to be mapped; for the above example we use just two strings and one decimal, for Customer, Product and Cost.
The mapplet receives records from its parent mapping. The Expression first saves each incoming value into a variable and compares it with its counterpart from the earlier row, which is held in the variable cache for as long as the condition to flatten is satisfied.
Syntax to store current and previous values:
Port | Data type | Port type | Expression
i_Col2 | string | input |
prv_Col2 | string | variable | curr_Col2
curr_Col2 | string | variable | i_Col2
The condition/logic used to flatten records is parameterized and decided before the mapping is called, which increases the code’s reusability. The parameterized logic is passed to the Expression transformation via a mapplet parameter. Its value is evaluated as an expression, and the result is a flag value of either 1 or 2.
Syntax for port – flag:
Port | Data type | Port type | Expression
flag | integer | variable | $$Expr_compare
An example of a parameterized expression:
$$Expr_compare = IIF(curr_Col1 = prv_Col1 AND curr_Col2 != prv_Col2, 1, IIF(curr_Col1 != prv_Col1, 2))
A variable port named “rec_count” is incremented, based on the flag.
Syntax for port – rec_count:
Port | Data type | Port type | Expression
rec_count | integer | variable | IIF(flag=2, 0, IIF(flag=1, rec_count + 1, rec_count))
The Expression transformation now uses the values in the ports "flag" and "rec_count" to decide the placeholder for each incoming input value, i.e. the column in the target table that this data will ultimately move into. The process is iterative and goes on as long as the comparison logic ($$Expr_compare) holds good, i.e. until all records are flattened per the logic. For the three Cust1 rows in the example, flag/rec_count evaluate to 2/0, 1/1 and 1/2, steering Product and Cost into the first, second and third column sets respectively. An example of a placeholder expression is shown below:
Port | Data type | Port type | Expression
v_Field1 | (data type) | variable | IIF(flag=2 AND rec_count=0, curr_Col1, v_Field1)
The port "write_flag_1" is set to 1 when the comparison logic fails (meaning flattening is complete), and the Filter transformation writes the record out only once it has been completely transposed.
Filter condition:
write_flag_1 | integer | variable | IIF(flag=2 AND write_flag > 1, 1, 0)
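Informatica aside, the underlying flattening logic is easy to simulate in plain Java. The sketch below is only an illustration of the prv/curr comparison described above, applied to the Customer/Product/Cost example; the class and method names are made up for this post, and it is not the mapplet itself.

import java.util.ArrayList;
import java.util.List;

// Simulates the Flattener: rows sorted by Customer are compared with the previous row;
// while the key repeats, Product and Cost are appended to the current record, and the
// record is written out when the key changes (the equivalent of the filter condition firing).
public class FlattenerDemo {

    record Row(String customer, String product, int cost) {}

    static List<List<String>> flatten(List<Row> sortedRows) {
        List<List<String>> out = new ArrayList<>();
        String prevCustomer = null;                     // plays the role of prv_Col1
        List<String> current = null;
        for (Row r : sortedRows) {
            if (!r.customer().equals(prevCustomer)) {   // flag = 2: a new key has arrived
                if (current != null) {
                    out.add(current);                   // write the finished record out
                }
                current = new ArrayList<>();
                current.add(r.customer());
                prevCustomer = r.customer();
            }
            current.add(r.product());                   // next free Product_n column
            current.add(String.valueOf(r.cost()));      // next free Cost_n column
        }
        if (current != null) {
            out.add(current);                           // flush the last group
        }
        return out;
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("Cust1", "P1", 10), new Row("Cust1", "P2", 20), new Row("Cust1", "P3", 30),
            new Row("Cust2", "ABC", 10), new Row("Cust2", "P2", 25), new Row("Cust2", "Def", 10));
        flatten(rows).forEach(System.out::println);
        // Prints [Cust1, P1, 10, P2, 20, P3, 30] and [Cust2, ABC, 10, P2, 25, Def, 10]
    }
}

The mapplet achieves the same effect with the flag, rec_count and v_Field ports because Informatica processes one row at a time and has no list it can append to.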

3. Outcome:
After developing and implementing the code, we found it to be a useful utility, so we thought of sharing it. We would like to hear suggestions from readers on performing the same functionality in a different way, so please do share your views.

Wednesday, September 2, 2009

Process Control / Audit of Workflows in Informatica


1. Process Control – Definition
Process control, or auditing, of a workflow in Informatica means capturing job information such as start time, end time, read count, insert count, update count and delete count. This information is captured and written into a table as the workflow executes.

2. Structure of Process Control/Audit table
The table structure of the process control table is given below.
Table 1: Process Control structure

Column | Data type | Length | Description
PROCESS_RUN_ID | Number(p,s) | 11 | A unique number used to identify a specific process run.
PROCESS_NME | Varchar2 | 120 | The name of the process (this column will be populated with the names of the Informatica mappings).
START_TMST | Date | 19 | The date/time when the process started.
END_TMST | Date | 19 | The date/time when the process ended.
ROW_READ_CNT | Number(p,s) | 16 | The number of rows read by the process.
ROW_INSERT_CNT | Number(p,s) | 16 | The number of rows inserted by the process.
ROW_UPDATE_CNT | Number(p,s) | 16 | The number of rows updated by the process.
ROW_DELETE_CNT | Number(p,s) | 16 | The number of rows deleted by the process.
ROW_REJECT_CNT | Number(p,s) | 16 | The number of rows rejected by the process.
USER_ID | Varchar2 | 32 | The ETL user identifier associated with the process.
3.  Mapping Logic and Build Steps
The process control flow has two data flows: an insert flow and an update flow. The insert flow runs before the main mapping and the update flow runs after it; this ordering is chosen in the “Target Load Plan”. The source for both flows can be a dummy source that returns one record as output, for example select ‘process’ from dual or select count(1) from Table_A. The following mapping variables are to be created:
Table 2: Mapping Parameter and variables
$$PROCESS_RUN_ID
$$PROCESS_NAME
$$READ_COUNT
$$INSERT_COUNT
$$UPDATE_COUNT
$$DELETE_COUNT
$$REJECT_COUNT
Steps to create Insert flow:
  • 1. Use “select ‘process’ from dual” as the SQL query in the Source Qualifier
  • 2. Use a Sequence Generator to create running process_run_id values
  • 3. In an Expression, call SetVariable($$PROCESS_RUN_ID, NEXTVAL) and assign $$PROCESS_NAME to o_process_name, an output-only field
  • 4. In an Expression, assign $$SessionStarttime to o_Starttime, an output-only field
  • 5. In an Expression, accept the sequence id from the Sequence Generator
  • 6. Insert into the target process control table with all the above three values
Table 3: Process Control Image after Insert flow

Column | Value
PROCESS_RUN_ID | 1
PROCESS_NME | VENDOR_DIM_LOAD
START_TMST | 8/23/2009 12:23
END_TMST |
ROW_READ_CNT |
ROW_INSERT_CNT |
ROW_UPDATE_CNT |
ROW_DELETE_CNT |
ROW_REJECT_CNT |
USER_ID | INFA8USER
Steps in the main mapping:
  • 1. After the Source Qualifier, increment the read count in a variable (v_read_count) in an Expression for each record read, and call SetMaxVariable($$READ_COUNT, v_read_count)
  • 2. Before the Update Strategy of the target instances, do the same for the insert/update/delete counts; all the variables are now set with their respective counts
Steps to create Update flow:
  • 1. Use “select ‘process’ from dual” as the SQL query in the Source Qualifier
  • 2. Use SetMaxVariable to get the process_run_id created in the insert flow
  • 3. In an Expression, assign $$INSERT_COUNT to o_insert_count, an output-only field, and assign all the other counts in the same way
  • 4. In an Expression, assign $$SessionEndtime to o_Endtime, an output-only field
  • 5. Update the target process control table with all the above values where process_run_id equals the process_run_id generated in the insert flow
Table 4: Process Control Image after Update flow
Column | Value
PROCESS_RUN_ID | 1
PROCESS_NME | VENDOR_DIM_LOAD
START_TMST | 8/23/2009 12:23
END_TMST | 8/23/2009 12:30
ROW_READ_CNT | 1000
ROW_INSERT_CNT | 900
ROW_UPDATE_CNT | 60
ROW_DELETE_CNT | 40
ROW_REJECT_CNT | 0
USER_ID | INFA8USER
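Outside Informatica, the same insert-then-update audit pattern can be sketched in plain JDBC. This is only an illustration under a few assumptions: the table and column names follow Table 1, while the class name, method names and the way the Connection is obtained are hypothetical.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

// Plain-JDBC illustration of the audit pattern above: one row is inserted when the
// process starts and the same row is updated with the counts when it ends.
public class ProcessControlAudit {

    private final Connection conn;

    public ProcessControlAudit(Connection conn) {
        this.conn = conn;
    }

    // Equivalent of the insert flow: record the start of a run.
    public void recordStart(long runId, String processName, String userId) throws Exception {
        String sql = "INSERT INTO process_control "
                   + "(process_run_id, process_nme, start_tmst, user_id) VALUES (?, ?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, runId);
            ps.setString(2, processName);
            ps.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
            ps.setString(4, userId);
            ps.executeUpdate();
        }
    }

    // Equivalent of the update flow: write the end time and the counts for the same run id.
    public void recordEnd(long runId, long read, long inserted, long updated,
                          long deleted, long rejected) throws Exception {
        String sql = "UPDATE process_control SET end_tmst = ?, row_read_cnt = ?, "
                   + "row_insert_cnt = ?, row_update_cnt = ?, row_delete_cnt = ?, "
                   + "row_reject_cnt = ? WHERE process_run_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            ps.setLong(2, read);
            ps.setLong(3, inserted);
            ps.setLong(4, updated);
            ps.setLong(5, deleted);
            ps.setLong(6, rejected);
            ps.setLong(7, runId);
            ps.executeUpdate();
        }
    }
}

Within Informatica the same effect is obtained with the two flows and the mapping variables; the sketch is only meant to make the pattern explicit.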

4. Merits over Informatica Metadata
This information is also available in the Informatica metadata tables; however, maintaining it within our own system has the following benefits:
  • No need to write complex queries to bring the data in from the metadata tables
  • Job names need not be mapping names and can be user-friendly names
  • Insert/update/delete counts of all targets, as well as of individual targets, can be audited
  • The audit information can be maintained outside the metadata security level and can be used by other mappings in their transformations
  • Can be used by mappings that build parameter files
  • Can be used by mappings that govern data volume
  • Can be used by production support to quickly check the status of a load

Friday, June 26, 2009

Delivering – Probe SAP

I acted as a midwife, assisting mother ‘Innovation’ in bringing her baby, ‘Probe SAP’, into the world. I was the delivery coach and saw every step of the gestation period and how much care the mother took to bring the baby into the world. It was an uphill task, but as with all our mothers, ‘Innovation’ pulled it through. Kudos to mother Innovation.
Now for the midwife’s view. I had the privilege of introducing the baby to the world… and what big expectations there were for this baby from different corners! Fortunately, in this case it was possible to tailor-make the baby to the world’s demands. Even with that flexibility, it was very difficult to meet the expectations. With the privilege also comes the responsibility to take the criticism… ‘I thought the baby would look like Shahrukh Khan’, ‘I thought the baby could fly like Superman’, ‘I thought the baby would run like Carl Lewis on the first day’ and so on. These were the kinds of expectations thrown at me during the baby’s introduction, but I am happy with the way we as the delivery team worked with those expectations and explained to the public what the baby can do.
What the baby can do anyway …
It can analyze any SAP environment, list all the customizations and come up with the impact of an upgrade. The customizations could be anything, of any object type; Probe recognizes 54 types of objects. The baby has two personalities. Inventizer is the uncommitted guy who does not look into the details but is interested in getting an overall idea of the person he deals with: Inventizer gives a complete picture of an environment, so that anyone gets an idea of what it takes to maintain or upgrade it. The other split personality is Analyzer, the finicky girl who wants to go into every detail and find fault in every little thing: Analyzer goes to the level of identifying the line number where an incompatible element exists so that the upgrade can be carried out. With these, the baby has surely taken its big leap into the world. Now it requires constant nurturing to grow into an adult who can tackle anything in the real world of SAP upgrades and maintenance.
Immanuel, the Innovation lead and head mother of Hexaware’s Innovation team, has done great work in directing the team through this delivery. Without his passion and focus in a demanding environment with limited resources, the childbearing would have been even more difficult. I do not want to mention other names for fear of missing someone out. A job well done to the whole Innovation team of mothers involved in this.
The baby has taken its first step toward running like Carl Lewis!

Thursday, June 18, 2009

Drools for Airlines

Enterprise applications embrace a lot of business rules. These business rules are often critical parts of the decision-making processes that change the behavior of an enterprise transaction. While business rules should be an integral part of enterprise transactions, they should also be easy to use, easily adaptable to modification and segregated from the programming logic so that they remain maintainable.

One of the architectural mistakes we most often make is to embed these business rules in the application code. This approach poses some severe disadvantages:
  • When new rules are enforced, it becomes all the more difficult to add them: the program has to be modified to add each rule, causing downtime for the application.
  • Embedding business rules in the program reduces maintainability.
  • Business experts have minimal access to these critical business rules, since they are bundled with the core application logic.
  • Efficient use of a business rules engine or management system is inherently overlooked.
  • The business relies on the IT department for every rule modification.
  • There is reduced or no use of efficient algorithms for automated business decisions.
There are quite a number of business rule engines (management systems) available on the market. One prominent and noteworthy engine is Drools, also known as JBoss Rules™, which provides an efficient way to handle business rules within an enterprise transaction. Drools has a number of advantages that make it more attractive than the others, including but not limited to:
  • Drools caters to many programming languages, such as Java, Python and Groovy.
  • Drools can run on .NET.
  • Drools is declarative in nature, allowing programmers to use it with ease.
  • Drools is flexible enough to be used by means of a Domain Specific Language (DSL) to address the semantics of the problem domain.
  • Drools can be configured via decision tables (Excel tables).
  • Drools is based on a forward-chaining inference mechanism, which means that changes inferred during rule execution dynamically change the output behavior: the data is iterated over until the predefined goal is reached, unlike the backward-chaining approach.
Drools employs the famous Rete pattern-matching algorithm. When known facts are asserted into the knowledge base, the implementation fires the rules (defined by the rule set) in sequence until the end of the rule set is reached, looping back to the first rule if necessary.

Drools for Airline

An airline reservation system is a mission-critical system, deployed across the world and requiring split-second responses to any input. I am considering a small business rule in the airline passenger services space to explain the importance of Drools at runtime. A passenger list display for a flight might be queried with different search parameters. I am assuming that a query has been supplied to match the requested origin and destination, and a specific passenger booking status to match for display. The rule of thumb is to display only the confirmed passengers on board. Further, the rule should check whether the segment being queried is open for display.
Below is a snapshot of how the Drools rule file would look (denoted by the *.drl extension):
[Screenshot of the sample .drl rule file for the passenger list display]

One might argue that the above rule could be written in Java (or any other language, for that matter). The code would not look complex, but it would not look simple either, and the rule explained above is the simplest of its kind. Translating even this simple logic into application code would amount to redundant iterations, inefficient loops and so on. Notice the line where the status of the FlightLeg is checked against the list of flight legs, and the line where the passengers with confirmed status are collected: it is just a single line that does the work! Furthermore, anyone can infer the outcome of this decision; Java knowledge is not absolutely essential.

Drools employs flexible expression language extensions that cannot be programmed efficiently when the business logic is embedded in the application code. Furthermore, Drools brings in an abstraction for employing business rules, keeping the application code more readable and more manageable. Above all, the power of Drools is realized in complex decision-making processes during forward-chaining inference, especially when one decision outcome has an inherent effect on another.

The Rule Explained

As can be seen from the screenshot, the rule file contains a package declaration at the top, followed by the list of imports needed for the rule (similar to a Java source file). The next section is a set of rule definitions. A rule is recognized by a name, followed by a set of conditions to check (denoted by the when keyword) and a set of actions to take when the conditions are satisfied (denoted by the then keyword). A rule file can define any number of rules to be fired and, optionally, a firing order (denoted by the salience keyword). You should use this salience attribute if the firing of one rule has a consequence for another. The no-loop keyword in the second rule, as can be seen in the screenshot, instructs the rules engine to skip looping.

Drools also provides a set of predefined, enriched functions to work with collections and the like (such as the collect function seen in the rule snapshot above), and the retract function to evict objects from working memory when they are no longer needed.

Invoking the rule

Invoking the Drools rule file we defined is simple. To start with, the rule has to be built into a package. This is achieved using the KnowledgeBuilder.

KnowledgeBuilder kBuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();

A KnowledgeBuilder in Drools is responsible for taking input rule definitions (individual rule files, decision tables or DSL files) and turning them into what is called a KnowledgePackage, which the KnowledgeBase can consume. Typically this is achieved in the following way.

kBuilder.add(ResourceFactory.newClassPathResource("person.drl"), ResourceType.DRL);

The KnowledgeBuilder reports errors if the input rule definitions or decision tables cannot be compiled into knowledge packages. As a good programming practice, it is always worth checking whether the KnowledgeBuilder has errors after adding resources and before building the knowledge base.
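A minimal sketch of that check, using the hasErrors() and getErrors() methods of the KnowledgeBuilder shown above (the exception message is just an example):

if (kBuilder.hasErrors()) {
    // getErrors() returns the compilation problems collected while adding resources
    System.err.println(kBuilder.getErrors().toString());
    throw new IllegalStateException("Rule resources could not be compiled");
}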

Once the knowledge packages are built, they are ready to be consumed by the KnowledgeBase. This is accomplished in the following way:

KnowledgeBase kBase = KnowledgeBaseFactory.newKnowledgeBase();

kBase.addKnowledgePackages(kBuilder.getKnowledgePackages());

StatefulKnowledgeSession session = kBase.newStatefulKnowledgeSession();

//insert the objects that you want the rules to be fired for!
session.insert($object1);

session.insert($object2);

session.insert($object3);

//fire all the rules……
session.fireAllRules();

//dispose of this session since it is stateful.
session.dispose();

Wasn’t that simple? As can be inferred, writing business rules using Drools spares you from embedding the rules in application code, which could otherwise result in redundant iterations, complex logic for dynamic decision inference and so on. All of that is done behind the scenes by Drools. Moreover, Drools employs efficient algorithms for the best performance, which would otherwise have to be designed by hand (or compromised on, which is what happens most of the time). Above all, the rules can be written as decision tables or XML files, which makes it even more attractive.
Let us consider a small (rather ‘naïve’, to be precise) example to see how to employ Drools at run time. I am assuming a face book class, where a single face book has a collection of persons and each person has an address. I am considering using Drools for these two business rules:
  • No two persons in the face book instance can share the same email address.
  • When a person’s address type is not specified, it should default to ‘Residence’.
Downloadable source code (a small standalone example to start with) is provided for reference as an Eclipse project. The source code uses the Drools binary distribution (v5.x, http://www.jboss.org/drools), XStream (another impressive library that converts your Java objects to XML without the need for schema definitions or XML annotations; it can be found at http://xstream.codehaus.org) and Apache Log4J (http://logging.apache.org).
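For orientation, the fact classes behind those two rules could be as simple as the plain JavaBeans below. This is only a hypothetical sketch; the class and field names in the downloadable project may differ. Instances of these classes are what would be inserted into the session before fireAllRules() is called.

// Address.java – a hypothetical fact class; type is null when not specified
public class Address {
    private String type;   // e.g. "Residence" or "Office"

    public String getType() { return type; }
    public void setType(String type) { this.type = type; }
}

// Person.java – a hypothetical fact class constrained by the email-uniqueness rule
public class Person {
    private String email;
    private Address address = new Address();

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }
}

Drools constrains plain bean properties, so the rules only need getters (and setters, for the defaulting rule) on the fields they check.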

Wednesday, June 3, 2009

Databases and Marketing


Really exciting times to be in Marketing. Well, personally for me they are (the geek in me just loves how databases and analytics have become so critical to marketing! :) )
It is incredible how central data is becoming to the art and science of marketing. In fact, marketing nowadays is so data driven that it is more science than art.
And I am not referring to ROI and marketing measurement data. That data is used to speak the same language as the CFO and CEO, to get a seat at the table in the executive suite; something the marketing organization has had to learn in order to meet the CFO’s standards.
I am referring to a data-driven approach championed by the marketing department itself, born of its desire to run programs that are not gambles: campaigns designed with customers and their behaviour in mind, which therefore hit the target more often than not. And that has brought us to a time where databases and analytics are critical for marketing to succeed.
Not being familiar with database management and analytics is not an option anymore.

Monday, May 25, 2009

The Power of Now – Paradigm shift in Digital Marketing


For the initiated, and for those who have wet their beaks in digital marketing, these are really exciting times. And yet there is constant pressure for marketing strategies to evolve. There is a big shift happening in the way information is served and utilized.
Information is now all about “newness” and “nowness”. It is dynamic, constantly reinventing itself to stay in the rat race for attention. It is no longer just about standalone pages or browsers. It is more about streaming content, about relevance amid constant churn, about adding enhanced value to your user segment, all of which builds a conversation toward a symbiotic relationship with that segment.
Streams of Data, Not just static pools
Much water has flowed under the internet highway since the advent of Really Simple Syndication (RSS). Social media networks have taken center stage, and dynamic information is served and lapped up in myriad ways. The possibilities are endless: a chirpy real-time tweet about an event, a face-off with a corporate ambassador on Facebook, a StumbleUpon hit on your new proprietary IP microsite, a Technorati listing of your corporate blog, a rank boost from a Wikipedia link, and so on.
Amid all this noise, it is quite important to make sense of the signals, especially for B2B marketing, which has to make sure that this shift in dynamics is effectively leveraged to generate awareness, drive leads and nurture relationships.
Swimming with the tide of dynamic information
The moot point here is whether this accent on the immediacy of information holds intrinsic value for B2B marketing. Yes, of course it does; ignoring this new shift in information distribution is not an option. The key is to use these social streams to row people to your site and to ensure the site is sticky enough for them to decide in your favor.
Social streams can help build the reputation and trust that connect you with your customers. In many ways they give B2B marketers an easy way to participate in conversations that are about them. It is still early to tell, though; plunging into the stream is the easiest way to find out.
This paradigm shift is here to stay, and technology will be its prime driver. With Twitter going one up on Google in the real-time search stakes (an earthquake this minute appears on Twitter faster than on Google), this information shift could forever change the way even a search result looks.

Monday, May 18, 2009

Digital Media Marketing


The power of digital media (usually referred to as online media) has inspired me a lot over the years; or rather, it never ceases to amaze me!
In today’s discriminating and techno-savvy world, with almost every piece of business accomplished through the internet, a company’s presence on the web has become mandatory, or should I say ‘a crucial factor’?
Here are some of my observations on how online channels are used by businesses today.

Realize
Based on my experience in this field (I’m not a techie, though :)), I have found that very few companies have exploited digital media through their websites. I think every company needs to realize the importance of its presence on the web, with the ability to act fast, engage a lot of creativity and utilize resources capable of delivering innovative ideas, all while keeping budgetary constraints in mind.
Web metrics play a crucial role in determining the type of content that can attract and hold a customer’s interest, keep them on the website longer and make them come back for more, ideally meeting user demand. Today, static images and plain text are being replaced with streaming media and much more interactive applications that provide the customer with a personalized experience. I feel many companies are struggling to develop their web presence into a more productive, virtual and truly aggressive weapon.
Though a company faces a string of issues in managing its web presence effectively, it also has to recognize digital media’s potential to transform the entire market scenario and thereby ensure greater ROI.
I would also add that opportunities are in abundance for any company willing to try something along these lines!
React
As I was digging through an article on digital media, a particular sentence caught my attention, and it seems inescapable. It says, ‘A website has 10 seconds to draw a visitor or the person will “go back to Google”’! Well, are we talking here about “love at first site”? Yes we are!
In this fast-changing world, one has to face the reality that none of the above turns into reality so easily. Companies have to think from the customer’s perspective, analyzing and optimizing his or her experience on the website by tuning their infrastructure, improving data center capability and effectively managing content delivery, thereby nurturing relationships.
It is a do-or-die situation in which IT organizations, especially, need to focus and react immediately on enhancing the digital business in line with their business objectives.
Reap
There is nothing like reaping enhanced ROI through comprehensive digital media marketing in today’s world, which is going through a major financial recession. I might even daresay it is a silver lining amid the gloom: a well-defined approach with a strategic network, leveraged intelligence and the intent to provide a rich customer experience ensures improved business responsiveness and better cost control.
Well, I do want to write more on other areas of digital media in forthcoming posts. I would honestly appreciate your comments and suggestions.