Hekaton & SQL Server 2014
Hekaton and SQL Server 2014 are the new revolution in database management. After getting involved with cryptocurrency mining recently, purchasing some GAW Miners cloud Hashlets, I became accustomed to frequenting Hash Talk, their community forum. There, someone asked, "Who is Craig?" That riddle sent me on a quest for knowledge to find an answer.
I am by no means an expert, but after several hours of reading I thought I would share the conclusions and knowledge I gathered during this quest.
That quest led me to contrast reduction algorithms (whereby you use a static test data set to approximate and a small live set, making computation much easier), which appear to have proven very accurate through analyses of Aerosol Optical Thickness (AOT) and MR images in separate research.
This led me to SQL Server 2014, which seems to have incorporated this into its new systems. One step further is Hekaton, which builds on the SQL Server 2014 framework to create an in-memory database as opposed to an on-disk one. This brings substantial improvements for online transaction processing (OLTP) while reducing heat and power consumption.
SQL Server 2014
SSRS enables users to quickly and easily generate reports from Microsoft SQL Server databases.
The SSRS service provides a unique interface into Microsoft Visual Studio so that developers as well as SQL administrators can connect to SQL databases and use SSRS tools to format SQL reports in many complex ways. SSRS also provides a ‘Report Builder’ tool for less technical IT workers to format SQL reports of lesser complexity.
SQL Server 2014 was released to manufacturing on March 18, 2014, and released to the general public on April 1, 2014. By November 2013, two CTP revisions, CTP1 and CTP2, had been made available. SQL Server 2014 provides a new in-memory capability for tables that can fit entirely in memory (also known as Hekaton). While small tables may be entirely resident in memory in all versions of SQL Server, they may also reside on disk, so work is involved in reserving RAM, writing evicted pages to disk, loading new pages from disk, locking pages in RAM while they are being operated on, and many other tasks. By treating a table as guaranteed to be entirely resident in memory, much of the 'plumbing' of disk-based databases can be avoided.
For disk-based SQL Server applications, it also provides the SSD Buffer Pool Extension, which can improve performance by acting as a cache layer between DRAM and spinning media.
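To picture how a buffer pool extension works, here is a minimal Python sketch (my own illustration, not Microsoft's implementation; the class and method names are made up for the example): a small DRAM cache whose evicted pages spill to a larger SSD tier, so a later miss can be served without touching the spinning disk.

```python
from collections import OrderedDict

class TieredBufferPool:
    """Conceptual sketch of a buffer pool with an SSD extension tier.

    Hot pages live in a small DRAM cache; pages evicted from DRAM
    spill to a larger (but slower) SSD tier instead of being dropped,
    so a later miss can often be served without a disk read.
    """

    def __init__(self, dram_pages, ssd_pages, disk):
        self.dram = OrderedDict()      # page_id -> data, in LRU order
        self.ssd = OrderedDict()       # page_id -> data, in LRU order
        self.dram_pages = dram_pages
        self.ssd_pages = ssd_pages
        self.disk = disk               # backing store: page_id -> data
        self.disk_reads = 0

    def read(self, page_id):
        if page_id in self.dram:       # DRAM hit: fastest path
            self.dram.move_to_end(page_id)
            return self.dram[page_id]
        if page_id in self.ssd:        # SSD hit: avoids a disk read
            data = self.ssd.pop(page_id)
        else:                          # miss: go to spinning media
            self.disk_reads += 1
            data = self.disk[page_id]
        self._admit(page_id, data)
        return data

    def _admit(self, page_id, data):
        if len(self.dram) >= self.dram_pages:
            old_id, old_data = self.dram.popitem(last=False)
            self.ssd[old_id] = old_data    # spill eviction to SSD tier
            if len(self.ssd) > self.ssd_pages:
                self.ssd.popitem(last=False)
        self.dram[page_id] = data
```

With a two-page DRAM tier, reading pages 1, 2, 3 evicts page 1 to the SSD tier; a later read of page 1 is then served from SSD without another disk read.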
SQL Server 2014 also enhances the AlwaysOn (HADR) solution by increasing the readable secondaries count and sustaining read operations upon secondary-primary disconnections, and it provides new hybrid disaster recovery and backup solutions with Windows Azure, enabling customers to use existing skills with the on-premises version of SQL Server to take advantage of Microsoft’s global datacenters. In addition, it takes advantage of new Windows Server 2012 and Windows Server 2012 R2 capabilities for database application scalability in a physical or virtual environment.
You can embed data mining functionality directly into an application by including Analysis Management Objects (AMO), a set of objects your application can use to create, alter, process, and delete mining structures and mining models. Alternatively, you can send XML for Analysis (XMLA) messages directly to an instance of Analysis Services.
Custom plug-in algorithms
Analysis Services provides a mechanism for creating your own algorithms, and then adding the algorithms as a new data mining service to the server instance.
Analysis Services uses COM interfaces to communicate with plugin algorithms. To learn more about how to implement new algorithms, see Plugin Algorithms.
Choosing the best algorithm to use for a specific analytical task can be a challenge. While you can use different algorithms to perform the same business task, each algorithm produces a different result, and some algorithms can produce more than one type of result. For example, you can use the Microsoft Decision Trees algorithm not only for prediction, but also as a way to reduce the number of columns in a dataset, because the decision tree can identify columns that do not affect the final mining model.
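To see concretely how a decision tree can identify columns that do not affect the model, here is a small Python sketch of the information-gain criterion that trees split on; a column whose gain is zero never influences the outcome and can be dropped. This is my own illustration, not the Microsoft Decision Trees algorithm itself, and the function names are invented.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(column, labels):
    """Reduction in label entropy after splitting on one column --
    the same criterion a decision tree uses to choose split attributes."""
    total = entropy(labels)
    n = len(labels)
    remainder = 0.0
    for value in set(column):
        subset = [lab for v, lab in zip(column, labels) if v == value]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

def informative_columns(rows, labels, threshold=0.0):
    """Keep only column indices whose split gain exceeds the threshold;
    columns that never affect the outcome score zero gain."""
    n_cols = len(rows[0])
    return [i for i in range(n_cols)
            if information_gain([r[i] for r in rows], labels) > threshold]
```

For example, with rows where column 0 perfectly determines the label and column 1 is constant, `informative_columns` keeps only column 0, which is exactly the column-reduction use of a decision tree described above.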
However, there is no reason that you should be limited to one algorithm in your solutions. Experienced analysts will sometimes use one algorithm to determine the most effective inputs (that is, variables), and then apply a different algorithm to predict a specific outcome based on that data. SQL Server data mining lets you build multiple models on a single mining structure, so within a single data mining solution you might use a clustering algorithm, a decision trees model, and a naïve Bayes model to get different views on your data. You might also use multiple algorithms within a single solution to perform separate tasks: for example, you could use regression to obtain financial forecasts, and use a neural network algorithm to perform an analysis of factors that influence sales.
“Hekaton, in contrast, is a row-based technology squarely focused on transaction processing (TP or OLTP-OnLine TP) workloads,” explained Campbell. “Note that these two approaches are not mutually exclusive. The combination of Hekaton and SQL Server’s existing xVelocity columnstore index and xVelocity analytics engine, will result in a great combination,” he concluded.
For now, Hekaton is being tested by a select set of customers, including financial services and online gaming companies with “extremely demanding TP requirements,” revealed Campbell. Microsoft is gearing up for a public technology preview to be announced via the company’s blogs.
Hekaton is a new database engine optimized for memory resident data and OLTP workloads that is fully integrated into Microsoft SQL Server. A key innovation that enables high performance in Hekaton is compilation of SQL stored procedures into machine code.
Hekaton is a new database engine targeted for OLTP workloads under development at Microsoft. It is optimized for large main memories and many-core processors. It is fully integrated into SQL Server, which allows customers to gradually convert their most performance-critical tables and applications to take advantage of the very substantial performance improvements offered by Hekaton.
Hekaton achieves its high performance and scalability by using very efficient latch-free data structures, multiversioning, a new optimistic concurrency control scheme, and by compiling T-SQL stored procedures into efficient machine code. As evidenced by our experiments, the Hekaton compiler reduces the instruction cost of executing common queries by an order of magnitude or more.
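To get a feel for how multiversioning and optimistic concurrency control fit together, here is a toy Python sketch, entirely my own and greatly simplified (Hekaton's actual scheme is far richer): writers install new timestamped versions rather than updating in place, readers never block, and commit-time validation aborts a transaction whose reads were overwritten in the meantime.

```python
import itertools

class MVStore:
    """Toy multiversion store with optimistic concurrency control.

    Each write installs a new version stamped with a commit timestamp
    instead of updating in place. A transaction reads the snapshot as
    of its start timestamp, and validates at commit that nothing it
    read has a newer committed version -- otherwise it aborts.
    (Write-write conflict handling is omitted to keep the sketch small.)
    """

    def __init__(self):
        self.versions = {}             # key -> [(commit_ts, value), ...]
        self.clock = itertools.count(1)

    def begin(self):
        return {"start_ts": next(self.clock), "reads": {}, "writes": {}}

    def read(self, txn, key):
        if key in txn["writes"]:       # read your own writes
            return txn["writes"][key]
        # visible version: latest commit at or before the snapshot
        visible = [(ts, v) for ts, v in self.versions.get(key, [])
                   if ts <= txn["start_ts"]]
        ts, value = max(visible) if visible else (0, None)
        txn["reads"][key] = ts         # remember version for validation
        return value

    def write(self, txn, key, value):
        txn["writes"][key] = value

    def commit(self, txn):
        # validation: abort if any key we read has a newer committed version
        for key, seen_ts in txn["reads"].items():
            latest = max((ts for ts, _ in self.versions.get(key, [])),
                         default=0)
            if latest > seen_ts:
                return False           # conflict: abort, caller may retry
        commit_ts = next(self.clock)
        for key, value in txn["writes"].items():
            self.versions.setdefault(key, []).append((commit_ts, value))
        return True
```

Note there are no locks anywhere: a reader just picks the right version for its snapshot, and conflicts surface only at commit. That optimistic bet is what pays off under the short, memory-resident transactions Hekaton targets.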
Read More on Hekaton, SQL Server 2014 & CR Algorithms
In this paper, the urban BRDF model was applied to AOT inversion with the improved contrast reduction (CR) algorithm. Measured data from AERONET stations were collected to validate the algorithm and evaluate its accuracy. Results show that the AOT inversion can achieve high precision.
A method for computing matrix CR bounds for image reconstruction problems uses an iterative algorithm that avoids the intractable inversion of the Fisher matrix required by direct methods.
The iterative algorithm approximates the CR bound without inverting the Fisher information matrix and requires only O(n²) flops per iteration. It generates a sequence of approximation matrices that converges at an exponential rate to the actual CR bound matrix, the inverse of the Fisher information matrix.
The key to the algorithm is the specification of a diagonal "splitting" matrix with the proper spectral properties. If these eigenvalues are nonnegative, the algorithm gives a sequence of approximations that are actual lower bounds converging monotonically to the CR bound.
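The splitting idea can be sketched in a few lines of numpy. This is my own reconstruction of the general technique, not the paper's exact algorithm: choose a diagonal D so that D − F is positive semidefinite (here via Gershgorin row sums, an assumption for the sketch), and a simple recursion converges to a column of F⁻¹ at one matrix-vector product, i.e. O(n²) flops, per iteration.

```python
import numpy as np

def cr_bound_column(F, i, iters=200):
    """Approximate column i of the CR bound matrix F^{-1} without
    ever inverting the Fisher information matrix F.

    Splitting F = D - (D - F) with a diagonal D chosen so D - F is
    positive semidefinite (d_jj = absolute row sums works for
    symmetric F by Gershgorin's theorem), iterate

        y_{k+1} = y_k + D^{-1} (e_i - F y_k),   y_0 = 0,

    which has F^{-1} e_i as its fixed point and costs one
    matrix-vector product per iteration.
    """
    n = F.shape[0]
    d = np.abs(F).sum(axis=1)      # diagonal splitting matrix D
    e = np.zeros(n)
    e[i] = 1.0                     # unit vector selecting column i
    y = np.zeros(n)
    for _ in range(iters):
        y = y + (e - F @ y) / d    # one O(n^2) update step
    return y
```

Running it on a small symmetric positive-definite Fisher matrix and comparing against a direct inverse shows the iterates converging to the corresponding column of F⁻¹.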