November 18, 2013

esProc Helps Database Realize Real-time Computation of Big Data

The Big Data Real-time Application is a scenario in which computation and analysis results must be returned in real time even when the data volume is huge. This is an emerging demand on database applications in recent years.

In the past, data volumes were small, computations were simple, and concurrency was low, so the pressure on the database was not great. A high-end or mid-range database server or cluster could allocate enough resources to meet the demand. Moreover, in order to access both the current business data and the historical data rapidly and concurrently, users also tended to run the query and analysis system on the same database server as the production system. In this way, the database cost was lowered, data management was streamlined, and concurrency was ensured to some extent. That was the prime time of real-time database application development.

In recent years, due to the data explosion and increasingly diversified and complex applications, the database landscape has changed. The most obvious change is that data is growing at an accelerating pace to ever higher volumes. Applications are increasingly complex, and the number of concurrent accesses is no exception. In this age of big data, the database is under increasing pressure, posing a serious challenge to real-time applications.

The first challenge is real-time responsiveness. Under heavy workload, database performance drops dramatically, responses become sluggish, and the user experience quickly goes from bad to worse. The normal operation of critical business systems is seriously affected, and the real-time application in effect becomes only half real-time.

The second challenge is cost. To alleviate the performance pressure, users have to upgrade the database. The database server is expensive, and so are the storage media and user licenses. Most databases charge extra by the number of CPUs, cluster nodes, and size of storage space. Because data volume and database pressure keep increasing, such upgrades must be repeated at intervals.

The third challenge is the database application itself. Because the increasing pressure on the database can seriously affect the core business application, users have to off-load the historical data from the database. Two groups of database servers thus come into being: one for the historical data and the other for the core business data. As we know, the native cross-database query capability of databases is quite weak and its performance is poor. Yet to deliver the latest analysis results on time, applications must run cross-database queries over both groups. Application programming thus grows ever more complex.

The fourth challenge is database management. To deliver the latest analysis results on time while avoiding complex and inefficient cross-database programming, most users accept the added management cost and difficulty of keeping the historical database updated with the latest data from the business database. Advanced editions of databases usually provide subscription-and-distribution or data replication functions for this purpose.

Beset with these four challenges, the real-time big data application can hardly make progress.

How can the concurrency of a big data application be guaranteed? How can the database cost be reduced while real-time responsiveness is ensured? How can cross-database queries be implemented easily? How can the management cost and difficulty be reduced? These are among the hottest topics discussed by CIOs and CTOs.

esProc is a good remedy for this stubborn headache. It is a database middleware with complete computational capability, supporting computation over external storage, across databases, and in parallel. The combination of a database and esProc delivers enough capability to solve the four challenges of big data applications.

http://www.raqsoft.com/product-esproc

esProc supports computation over files in external storage and in HDFS. That is to say, you can store a great volume of historical data on several cheap hard disks in average PCs and leave it to esProc to handle, while the database alone stores and manages only the current core business data. The goals of cutting cost and diverting computational load are thus achieved.

esProc supports parallel computing, so the computational pressure can be diverted to several cheap node machines when there are heavy workloads and a great many sudden, concurrent access requests. Its real-time responsiveness is equal or even superior to that of a high-end database.

esProc offers complete computational capability, especially for complex data computing. Even on its own it can handle applications involving complex business logic, and it does an even better job when working with the database. It supports computation over data from multiple sources, including structured and unstructured data, database data, local files, big data files in HDFS, and distributed databases, while providing a unified JDBC interface to the application at the upper level. Thus the difficulty of coupling big data with traditional databases is reduced, the limitation of single-source reports is removed, and big data applications become easier to build.

With seamless support for combined computation over files in external storage and data in the database, users no longer need complex and expensive data synchronization technology. The database focuses only on the current data and core business applications, while esProc enables users to access both the historical data in external storage and the current business data in the database. In this way, the latest analysis results can be delivered on time.

The cross-database and external-storage computation capabilities of esProc ensure real-time queries while alleviating the pressure on the database. With the assistance of esProc, the real-time big data application can be implemented efficiently at relatively low cost.

November 13, 2013

Hadoop + esProc Help You Replace IOE

What is IOE? I = IBM, O = Oracle, and E = EMC. Together they represent the typical high-end database and data warehouse architecture: high-end servers such as HP, IBM, and Fujitsu; high-end database software such as Teradata, Oracle, and Greenplum; and high-end storage such as EMC, Violin, and Fusion-io.

In the past, such high-performance database architecture was the preference of large and mid-sized organizations. It ran stably with superior performance and became popular when the degree of informatization was not so high and enterprise applications were simple. With the explosive data growth and today's diversified and complex enterprise applications, most enterprises have gradually realized that they should replace IOE, and quite a few, including Intel, Alibaba, Amazon, eBay, Yahoo, and Facebook, have successfully implemented road maps to retire the high-end database entirely.

The data explosion has brought a sharp increase in demand for storage capacity, while diversified and complex applications bring fast-growing computational pressure and concurrent access requests. The only solution is to upgrade ever more frequently, and more and more enterprise managers feel the pressure of the great cost of upgrading IOE. More often than not, enterprises still suffer from slow responses and heavy workloads even after investing heavily. That is why these enterprises are determined to replace IOE.

Hadoop is one of the IOE-replacement solutions on which enterprise managers have pinned great hope.

It supports cheap desktop hard disks as a replacement for the high-end storage media of IOE.

Its HDFS file system can replace the disk cabinets of IOE while ensuring secure data redundancy.

It supports cheap PCs as a replacement for the high-end database server.

It is open source software, incurring no extra cost for additional CPUs, storage capacity, or user licenses.

With its support for parallel computing, inexpensive scale-out can be implemented: the storage and computing pressure can be diverted to multiple inexpensive PCs at lower acquisition and management cost, yielding greater storage capacity, higher computing performance, and far more concurrent processes than IOE. That is why Hadoop is so highly anticipated.

However, IOE still has an advantage over Hadoop in its great data computing capability. Data computing is the most important software function of the modern enterprise data center, and computations involving complex business logic are now commonplace, particularly in applications for enterprise decision making, process optimization, performance benchmarking, time control, and cost management. Hadoop alone therefore cannot replace IOE. As a matter of fact, even the high-profile champions of replacing IOE have had to keep part of it. With its insufficient computing capability, Hadoop can only handle simple ETL, data storage, and data locating, and struggles with truly complex computations over massive business data.

To replace IOE, we need computational capability no weaker than an enterprise-level database, seamlessly incorporated into Hadoop so as to give full play to Hadoop's strengths as middleware. esProc is just the choice to meet this demand.

esProc is a parallel computing framework built with pure Java and focused on powering Hadoop. It can access Hive via JDBC or read and write HDFS directly. With its complete data computing system, it offers an alternative to IOE for data computing of whatever complexity, and it is especially good at computations requiring complex business logic and stored procedures.

esProc provides a professional data scripting language offering a true set data type, which makes it easy to design algorithms from the business user's perspective and effortless to implement complex business logic. In addition, esProc supports ordered sets for arbitrary access to set members and for serial-number-related computation. Sets of sets can represent complex grouping styles easily, for example equal grouping, align grouping, and enum grouping. Users can operate on a single record in the same way as on an object. esProc scripts are written and presented in a grid, so intermediate results can be referenced without being named, and complete code editing and debugging functions add to the convenience. esProc can be regarded as a dynamic set-oriented language that has something in common with R, while offering native support for distributed parallel computation from its core. Programmers benefit from the efficient parallel computation of esProc while keeping syntax as simple as R's. Built for data computing and optimized for data processing, esProc surpasses the existing Hadoop solutions in both development efficiency and computing performance for complex analysis.
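To make the three grouping styles concrete, here is a minimal Python sketch of what they mean: equal grouping produces one group per distinct key value, align grouping produces one group per value in a fixed base list (possibly empty), and enum grouping produces one group per arbitrary condition. This is a conceptual analogue only; the function names and data are hypothetical, and esProc's own grid syntax is different.

```python
def equal_group(rows, key):
    # equal grouping: one group per distinct key value found in the data
    groups = {}
    for r in rows:
        groups.setdefault(key(r), []).append(r)
    return groups

def align_group(rows, key, order):
    # align grouping: one group per value in the base list `order`;
    # a group may stay empty, and keys outside the base list are dropped
    groups = {v: [] for v in order}
    for r in rows:
        k = key(r)
        if k in groups:
            groups[k].append(r)
    return groups

def enum_group(rows, predicates):
    # enum grouping: one group per arbitrary condition; a row may fall
    # into several groups, or into none
    return [[r for r in rows if p(r)] for p in predicates]
```

For example, `align_group` over sales records keyed by region, with `order` listing all regions, yields a group for every region even when a region had no sales that day.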

The combined use of Hadoop + esProc fully remedies Hadoop's drawback, empowering Hadoop to replace the vast majority of IOE features and improving its computing capability dramatically.

November 11, 2013

esProc Alleviates the Pressure on Data Warehouses That Desperately Need Expansion


The data warehouse is essential to enterprise business intelligence and accounts for a great part of total enterprise cost. With the global data explosion in recent years, business data volumes have grown significantly, posing a serious challenge for the enterprise data warehouse to meet diverse and complex business demands. More data, more data warehouse applications, more concurrent accesses, higher performance, and faster I/O - all these demands put more pressure on the data warehouse. Every IT manager nowadays is concerned with expanding data warehouse capacity at lower cost.

Here is an example. A data warehouse is originally provisioned, as shown below:

Server: One cluster with two high performance database servers.
Storage space: 5TB high performance disk array.
CPU: 8 high performance CPUs.
User licenses: 100
To meet the capacity expansion needs for the next 12 months:
Computational performance: Double
Storage space: Quadruple
Concurrency: Double

How can an IT manager achieve this expansion goal? The common practice is to upgrade the database hardware and software: replace the servers with more advanced data warehouse servers or add two more of the same class, add a 15TB data-warehouse-grade disk array or switch to a 20TB disk cabinet, and add 8 CPUs. In addition, they have to pay expensive licensing fees for the additional user licenses, CPUs, and disk storage space.

Whichever way you choose to upgrade, the data warehouse vendor will ultimately bind you to their products and charge you for expensive upgrades.

Computation outside the database is an alternative way to expand capacity. As we all know, of the 20TB of data warehouse data (30% real data and 70% buffer), the core data usually takes up less than 1TB; the remaining 19TB is all redundant data. For example, after a new application is deployed, for the sake of core data security, the data warehouse usually requires a copy of the data the application uses rather than allowing it direct access to the core data. Quite often the new application needs access to summarized and processed core data, for which a core-data-based intermediate table is built to speed up access. Such redundant data grows with existing and emerging business, while the total amount of core data always stays low.

This redundant data is not core data and does not require the same high level of security protection. By moving it to average PCs and using tools other than the database for reading, writing, and computing, the cost of database capacity expansion can be reduced dramatically. So computation outside the database, combined with computation in the database, is the best choice for expanding database capacity. The benefits include:

Computational performance: Implement parallel computation across multiple nodes using inexpensive PCs and desktop CPUs. Compared with a high-performance database, the same or even greater computational performance can be achieved at relatively lower cost.

Storage space: With cost-effective desktop-level disks, users get storage space far greater than data-warehouse-grade disks at an extremely low cost. HDFS also facilitates data security, access consistency, and non-stop disk capacity expansion.

Concurrency: With concurrent access across multiple nodes, a centralized stream of concurrent accesses can be spread over multiple node machines, supporting more accesses than centralized access to the data warehouse. In addition, users do not have to pay for access licenses, additional CPUs, or disk storage space.

Computation outside the database thus looks pretty good, and Hadoop and similar software are available in the market to meet all the above demands. But why do so few people take Hadoop as an option to relieve the pressure of expanding data warehouse capacity? Because such software is not as powerful as the database in computing, in particular for computation involving complex logic.

What if there were software that met the above demands on computational performance, storage space, and concurrency, while remaining equal or even superior to the database in computing power? With such software, the expansion pressure on the database would evidently be relieved greatly, and so would the cost of database capacity expansion.

esProc is built to meet these demands. It is middleware specially designed to undertake the computation jobs between the database and the application. Toward the application layer, esProc presents an easy-to-use JDBC interface; toward the database layer, esProc is powerful in parallel computation. By implementing computation outside the database or in external storage, esProc alleviates the pressure on the database in computation, storage, and concurrency. Owing to this, organizations can cut the cost of database software and hardware effectively while streamlining database administration.

esProc is built with a comprehensive and well-defined computing architecture, fully capable of sharing the workload of databases and undertaking computations of whatever complexity for applications. In addition, esProc supports parallel computation across multiple nodes, so massive or intensive data computation workloads can be shared evenly by multiple average servers or inexpensive PCs.

With its support for parallel computation, esProc can evenly decompose computation jobs that used to be solved centrally and allocate them to multiple average PCs, so that each node undertakes only a small part of the computation.
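The decompose-and-merge idea can be sketched in a few lines of Python. This is an illustration of the principle only: the worker pool, chunking scheme, and summing job are hypothetical stand-ins for esProc's actual node scheduling.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # the share of the job a single node would undertake
    return sum(chunk)

def parallel_total(data, workers=4):
    # decompose the job into roughly equal chunks, one per worker
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(partial_sum, chunks))
    # merge the partial results into the final answer
    return sum(partials)
```

The same split/compute/merge shape applies whether the "workers" are threads on one PC or, as in esProc's case, processes on separate node machines.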

With esProc, the core data can be stored in the database, while the intermediate tables and scripts derived from the core data are stored outside it. By leveraging resources sensibly, the workload pressure on the database is alleviated effectively, the database cost is kept under control, management problems are solved, and various data warehouse applications are handled with ease, including real-time high-performance applications, non-real-time big data applications, desktop BI, reporting, and ETL.

November 5, 2013

esProc Acting as Stored Procedure for Hadoop

Hadoop is a typical big data solution. Thanks to its inexpensive scale-out capability, it is attractive to many enterprise customers, such as eBay, Yahoo, Facebook, China Mobile, Amazon, IBM, and Intel. For simple computations in Hadoop we can use Pig, Hive, or other SQL-like languages. Sometimes, however, we encounter big data computations involving complex business logic. In a database these are easy to solve with stored procedures. What can we do in Hadoop?
        
For example, there are two tables: an Order table holding all orders and an Employee table listing the salespeople. We need to summarize the data in the big Order table, find the total order value for each salesperson, and replace the userID in the Order table with the person's full name. One thing to note: the Employee table contains junk data, which must be cleaned up according to the following rules:

1. If the userID or firstName is null or an empty string, the record is invalid.

2. The userID must contain only digits; a record is invalid if its userID contains any letters.

3. If a userID is duplicated, only the last entry is kept.

4. Remove the leading and trailing spaces of each field.

5. Capitalize the initial letter of firstName.

6. The full name is assembled as firstName + "." + lastName; but if lastName is null or an empty string, the full name is just firstName.
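To make the six rules concrete, here is a minimal Python sketch of the cleanup logic. The dict-based record layout and the function name are assumptions for illustration; the article's actual solution is an esProc script running against Hive tables.

```python
import re

def clean_employees(rows):
    """Apply the six cleanup rules to raw Employee records.

    rows: iterable of dicts with keys userID, firstName, lastName
    (a hypothetical layout; the real table may differ).
    Returns a dict mapping userID -> fullName.
    """
    result = {}  # keyed by userID, so a later duplicate replaces an earlier one (rule 3)
    for row in rows:
        # Rule 4: strip leading and trailing spaces first
        user_id = (row.get("userID") or "").strip()
        first = (row.get("firstName") or "").strip()
        last = (row.get("lastName") or "").strip()
        # Rule 1: userID and firstName must be present and non-empty
        if not user_id or not first:
            continue
        # Rule 2: userID must consist of digits only
        if not re.fullmatch(r"\d+", user_id):
            continue
        # Rule 5: capitalize the initial letter of firstName
        first = first[0].upper() + first[1:]
        # Rule 6: fullName = firstName + "." + lastName, or just firstName
        full_name = f"{first}.{last}" if last else first
        result[user_id] = full_name  # rule 3: the last entry wins
    return result
```

Rule 1 is read here as "either field missing invalidates the record", which matches rule 6's allowance for a missing lastName but not a missing firstName.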

The current Hadoop solutions cannot handle stored procedures. HiveQL is in fact a subset of SQL, which is why Hive is less capable than an RDB in data computing. Since Hive offers no stored procedures for complex business logic, the above computations are quite cumbersome to complete with Hive.

Such problems are usually solved in Hadoop by hard-coding Java MapReduce programs, which, working alongside Hive, can handle the more flexible and complex computations. However, writing MapReduce programs is complex and development efficiency is low; moreover, MapReduce offers relatively poor processing capability on set-oriented data. Achieving the computational goal above requires strong technical skills and considerable time. Hive was designed to improve the development efficiency of MapReduce, so falling back on MapReduce to assist Hive defeats the purpose and is obviously even harder.

For the computations that uncover true business value, the business logic is usually quite complex. Since no stored procedure is available for Hadoop, complex big data computing comes at too high a cost, and the applicability of Hadoop stays limited. Except for large users willing to invest heavily in development, most users regard it as the "inexpensive ETL tool for simple algorithms".

How can we empower Hadoop with stored procedures and perform big data computing involving complex business logic? esProc is quite a good choice!

esProc is a parallel computing framework built with pure Java and focused on powering Hadoop. It provides access to Hive via JDBC as well as the ability to read and write HDFS directly. Acting as the stored procedure of Hadoop, esProc can handle big data computations involving complex business logic. For the example above, the esProc solution is shown below:

As can be seen, the way to solve problem with esProc is intuitive and clear:

A1, A2: Use HiveQL to total the order values settled by each salesperson, and retrieve the Employee data.
D2: Create an empty result table in which to store the userID and fullName.
Lines 3-12: Traverse the Employee table from A2, perform the initial cleanup, and store the result in D2.
A12, A14: De-duplicate the records in D2 by userID through an algorithm specific to set-oriented data.
A15-A16: Associate A1 with D2 to compute the final result.
A17: Output the computational result via JDBC; for example, export it to a report or embed it in Java code.
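For readers without esProc at hand, the join-and-total step (A15-A16) can be approximated in plain Python as follows. The field layout is hypothetical, and the real solution runs as an esProc script against Hive.

```python
def total_orders_by_fullname(orders, employees):
    # orders: iterable of (userID, amount) pairs, as totaled per salesperson
    # employees: dict mapping userID -> fullName (already cleaned up)
    totals = {}
    for user_id, amount in orders:
        name = employees.get(user_id)
        if name is None:
            continue  # the order has no matching valid employee record
        totals[name] = totals.get(name, 0) + amount
    return totals
```

Orders whose userID was removed by the cleanup rules simply drop out of the result, mirroring an inner join between A1 and D2.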

esProc is a scripting language specialized for big data, offering a true set data type, easy algorithm design from the user's perspective, and effortless implementation of complex business logic. In addition, esProc supports ordered sets for arbitrary access to set members and for serial-number-related computation. Sets of sets can represent complex grouping styles easily, for example equal grouping, align grouping, and enum grouping. Users can operate on a single record in the same way as on an object. esProc scripts are written and presented in a grid, so intermediate results can be referenced without being named, and a complete code editing and debugging mechanism adds to the convenience. In short, esProc can be regarded as a dynamic set-oriented language that has something in common with R, while offering native support for distributed parallel computation from its core. esProc programmers benefit from efficient parallel computation while keeping syntax as simple as R's. Designed for data computing and optimized for big data processing, esProc, working with HDFS and Hive, can act as the stored procedure of Hadoop to improve development and computation efficiency.

Without stored procedures, the current Hadoop solutions are convenient only for simple querying, summarizing, and associative computations; for the complex computations that truly yield business results, the development cost is too great and the applicability is limited. esProc is introduced to empower Hadoop with such stored procedures just in time, and the applicability of Hadoop is undoubtedly expanded greatly.