Transactional Information Systems is the long-awaited, comprehensive work from leading scientists in the transaction processing field. Weikum and Vossen begin with a broad look at the role of transactional technology in today's economic and scientific endeavors, then delve into critical issues faced by all practitioners, presenting today's most effective techniques for controlling concurrent access by multiple clients, recovering from system failures, and coordinating distributed transactions. The authors emphasize formal models that are easily applied across fields, that promise to remain valid as current technologies evolve, and that lend themselves to generalization and extension in the development of new classes of network-centric, functionally rich applications. The book presents both the foundations of transactional systems and the practical aspects of the field that will help you meet today's challenges.
• Provides the most advanced coverage of the topic available anywhere--along with the database background required for you to make full use of this material.
• Explores transaction processing both generically, as a broadly applicable set of information technology practices, and specifically, as a group of techniques for meeting the goals of your enterprise.
• Contains information essential to developers of Web-based e-Commerce functionality--and a wide range of more "traditional" applications.
• Details the algorithms underlying core transaction processing functionality.
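Conflict serializability is the best-known of the formal models the blurb alludes to. Below is a minimal Python sketch of testing a schedule against it by building the conflict graph and checking for cycles; the schedule encoding and helper names are illustrative assumptions, not material from the book.

```python
# A minimal sketch of conflict-serializability testing. A schedule is a
# list of (transaction_id, action, data_item) steps; all names invented.
schedule = [
    (1, "read", "x"), (2, "read", "x"),
    (2, "write", "x"), (1, "write", "x"),
]

def conflicts(a, b):
    """Two steps conflict if they come from different transactions,
    touch the same item, and at least one of them is a write."""
    return a[0] != b[0] and a[2] == b[2] and "write" in (a[1], b[1])

def is_conflict_serializable(sched):
    """Build the conflict graph and reject schedules whose graph is cyclic."""
    edges = set()
    for i, a in enumerate(sched):
        for b in sched[i + 1:]:
            if conflicts(a, b):
                edges.add((a[0], b[0]))  # a precedes b
    nodes = {t for t, _, _ in sched}
    def reachable(start, goal, seen):
        # Depth-first search over transaction ids.
        for (u, v) in edges:
            if u == start and v not in seen:
                if v == goal or reachable(v, goal, seen | {v}):
                    return True
        return False
    return not any(reachable(t, t, set()) for t in nodes)

print(is_conflict_serializable(schedule))  # False: cycle T1 -> T2 -> T1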
Author: Philip A. Bernstein
Publisher: Morgan Kaufmann
Release Date: 2009-07-24
Principles of Transaction Processing is a comprehensive guide to developing applications, designing systems, and evaluating engineering products. The book provides detailed discussions of the internal workings of transaction processing systems, how those systems work, and how best to utilize them. It covers the architecture of Web application servers and transactional communication paradigms. The book is divided into 11 chapters, which cover the following:
• Overview of transaction processing application and system structure
• Software abstractions found in transaction processing systems
• Architecture of multitier applications and the functions of transactional middleware and database servers
• Queued transaction processing and its internals, with IBM's WebSphere MQ and Oracle's Streams AQ as examples
• Business process management and its mechanisms
• Description of two-phase locking, the B-tree locking and multigranularity locking used in SQL database systems, and nested transaction locking
• System recovery and the failures it handles
• The two-phase commit protocol (sketched below)
• Comparison of the tradeoffs of replicating servers versus replicating resources
• Transactional middleware products and standards
• Future trends, such as cloud computing platforms, composing scalable systems from distributed computing components, the use of flash storage to replace disks, and data streams from sensor devices as a source of transaction requests
The text meets the needs of systems professionals, such as IT application programmers who construct TP applications, application analysts, and product developers. The book will also be invaluable to students and novices in application programming.
• Complete revision of the classic "non-mathematical" transaction processing reference for systems professionals.
• Updated to focus on the needs of transaction processing via the Internet--the main focus of business data processing investments--via web application servers, SOA, and important new TP standards.
• Retains the practical, non-mathematical, but thorough conceptual basis of the first edition.
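The two-phase commit protocol named in the chapter list is small enough to sketch. Here is a minimal in-process simulation in Python under invented names; real implementations add durable logging and timeout handling, which are not modeled here.

```python
# A minimal sketch of two-phase commit; participants are simulated in-process.
class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit

    def prepare(self):
        # Phase 1: a real participant durably logs its intent before voting.
        return self.can_commit

    def commit(self):   print(f"{self.name}: committed")
    def rollback(self): print(f"{self.name}: rolled back")

def two_phase_commit(participants):
    # Phase 1 (voting): the coordinator collects a vote from everyone.
    votes = [p.prepare() for p in participants]
    if all(votes):
        # Phase 2 (decision): unanimous yes, so tell everyone to commit.
        for p in participants:
            p.commit()
        return "committed"
    # Any "no" vote (or a timeout, not modeled here) aborts everywhere.
    for p in participants:
        p.rollback()
    return "aborted"

print(two_phase_commit([Participant("db1"), Participant("queue", can_commit=False)]))
```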
Technologies for web applications -- Data model -- Hypertext model -- Content management model -- Advanced hypertext model -- Overview of the development process -- Requirements specifications -- Data design -- Hypertext design -- Architecture design -- Data implementation -- Hypertext implementation -- Advanced hypertext implementation -- Tools for model-based development of web applications.
Fuzzy Modeling and Genetic Algorithms for Data Mining and Exploration is a handbook for analysts, engineers, and managers involved in developing data mining models in business and government. As you'll discover, fuzzy systems are extraordinarily valuable tools for representing and manipulating all kinds of data, and genetic algorithms and evolutionary programming techniques drawn from biology provide the most effective means for designing and tuning these systems. You don't need a background in fuzzy modeling or genetic algorithms to benefit, for this book provides it, along with detailed instruction in methods that you can immediately put to work in your own projects. The author provides many diverse examples, as well as an extended example in which evolutionary strategies are used to create a complex scheduling system.
• Written to provide analysts, engineers, and managers with the background and specific instruction needed to develop and implement more effective data mining systems
• Helps you to understand the trade-offs implicit in various models and model architectures
• Provides extensive coverage of fuzzy SQL querying, fuzzy clustering, and fuzzy rule induction
• Lays out a roadmap for exploring data, selecting model system measures, organizing adaptive feedback loops, selecting a model configuration, implementing a working model, and validating the final model
• In an extended example, applies evolutionary programming techniques to solve a complicated scheduling problem
• Presents examples in C, C++, Java, and easy-to-understand pseudo-code
• Extensive online component, including sample code and a complete data mining workbench
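To make the evolutionary machinery concrete, here is a minimal genetic-algorithm loop in Python on a toy "one-max" objective. The population size, mutation rate, and operator choices are illustrative assumptions, not parameters from the book (whose own examples are in C, C++, Java, and pseudo-code).

```python
# A minimal genetic algorithm: maximize the number of 1-bits in a bit string.
import random

random.seed(42)
GENES, POP, GENERATIONS, MUTATION = 16, 30, 40, 0.02

def fitness(bits):
    return sum(bits)  # toy objective: count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENES)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    return [b ^ 1 if random.random() < MUTATION else b for b in bits]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    def pick():
        # Tournament selection: keep the fitter of two random individuals.
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b
    population = [mutate(crossover(pick(), pick())) for _ in range(POP)]

best = max(population, key=fitness)
print(fitness(best), "of", GENES)
```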
The two-volume set LNAI 9119 and LNAI 9120 constitutes the refereed proceedings of the 14th International Conference on Artificial Intelligence and Soft Computing, ICAISC 2015, held in Zakopane, Poland, in June 2015. The 142 revised full papers presented in the volumes were carefully reviewed and selected from 322 submissions. These proceedings present both traditional artificial intelligence methods and soft computing techniques; the goal is to bring together scientists representing both areas of research. The first volume covers the following topics: neural networks and their applications, fuzzy systems and their applications, evolutionary algorithms and their applications, classification and estimation, computer vision, image and speech analysis, and the workshop on large-scale visual recognition and machine learning. The second volume focuses on the following subjects: data mining, bioinformatics, biometrics and medical applications, concurrent and parallel processing, agent systems, robotics and control, artificial intelligence in modeling and simulation, and various problems of artificial intelligence.
Author: David C. Hay
Release Date: 2010-07-20
Data Model Patterns: A Metadata Map not only presents a conceptual model of a metadata repository but also demonstrates a true enterprise data model of the information technology industry itself. It provides a step-by-step description of the model and is organized so that different readers can benefit from different parts. It offers a view of the world being addressed by all the techniques, methods, and tools of the information processing industry (for example, object-oriented design, CASE, business process re-engineering, etc.) and presents several concepts that need to be addressed by such tools. The book is pertinent now that companies and government agencies, realizing that the data they use represent a significant corporate resource, recognize the need to integrate data that has traditionally been available only from disparate sources. An important component of this integration is management of the "metadata" that describe, catalogue, and provide access to the various forms of underlying business data. The "metadata repository" is essential for keeping track of the various physical components of these systems and their semantics. The book is ideal for data management professionals, data modeling and design professionals, and data warehouse and database repository designers.
• A comprehensive work based on the Zachman Framework for information architecture, encompassing the Business Owner's, Architect's, and Designer's views, for all columns (data, activities, locations, people, timing, and motivation)
• Provides a step-by-step description of the model, organized so that different readers can benefit from different parts
• Provides a view of the world being addressed by all the techniques, methods, and tools of the information processing industry (for example, object-oriented design, CASE, business process re-engineering, etc.)
• Presents many concepts that are not currently being addressed by such tools, and should be
Author: Jim Melton
Publisher: Morgan Kaufmann
Release Date: 2003
This guide documents SQL:1999's advanced features in the same practical, "programmer-centric" way that the first volume documented the language's basic features. It is no mere restatement of the standard, but rather authoritative guidance on making an application conform to it, both formally and effectively.
Author: Jim Melton
Publisher: Morgan Kaufmann
Release Date: 2011-04-08
XML has become the lingua franca for representing business data, for exchanging information between business partners and applications, and for adding structure--and sometimes meaning--to text-based documents. XML offers some special challenges and opportunities in the area of search: querying XML can produce very precise, fine-grained results, if you know how to express and execute those queries. For software developers and systems architects, this book teaches the most useful approaches to querying XML documents and repositories. It will also help managers and project leaders grasp how querying XML fits into the larger context of querying and XML. Querying XML provides a comprehensive background, from fundamental concepts (What is XML?) to data models (the Infoset, PSVI, the XQuery Data Model), to APIs (querying XML from SQL or Java), and more.
* Presents the concepts clearly and demonstrates them with illustrations and examples; offers a thorough mastery of the subject area in a single book.
* Provides comprehensive coverage of XML query languages and the concepts needed to understand them completely (such as the XQuery Data Model).
* Shows how to query XML documents and data using: XPath (the XML Path Language); XQuery, soon to be the new W3C Recommendation for querying XML; XQuery's companion XQueryX; and SQL, featuring the SQL/XML standard.
* Includes an extensive set of XQuery, XPath, SQL, Java, and other examples, with links to downloadable code and data samples.
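As a small illustration of the fine-grained querying the blurb describes, here is a sketch using the limited XPath subset in Python's standard-library xml.etree.ElementTree; the sample document is invented.

```python
# Querying an XML document with XPath-style expressions (stdlib only).
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<catalog>
  <book year="2006"><title>Querying XML</title><price>59.95</price></book>
  <book year="2002"><title>SQL:1999</title><price>74.95</price></book>
</catalog>
""")

# Select titles of books published after 2003.
for book in doc.findall("book"):
    if int(book.get("year")) > 2003:
        print(book.findtext("title"))

# Attribute predicates are supported directly in the path expression.
for title in doc.findall("book[@year='2006']/title"):
    print(title.text)
```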
SQL:1999 is the best way to make the leap from SQL-92 to SQL:1999, but it is much more than just a simple bridge between the two. The latest from celebrated SQL experts Jim Melton and Alan Simon, SQL:1999 is a comprehensive, eminently practical account of SQL's latest incarnation and a potent distillation of the details required to put it to work. Written to accommodate both novice and experienced SQL users, SQL:1999 focuses on the language's capabilities, from the basic to the advanced, and the ways that real applications take advantage of them. Throughout, the authors illustrate features and techniques with clear and often entertaining references to their own custom database.
• Gives authoritative coverage from an expert team that includes the editor of the SQL-92 and SQL:1999 standards.
• Provides a general introduction to SQL that helps you understand its constituent parts, history, and place in the realm of computer languages.
• Explains SQL:1999's more sophisticated features, including advanced value expressions, predicates, advanced SQL query expressions, and support for active databases.
• Explores key issues for programmers linking applications to SQL databases.
• Provides guidance on troubleshooting, internationalization, and changes anticipated in the next version of SQL.
• Contains appendices devoted to database design, a complete SQL:1999 example, the standardization process, and more.
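One of the advanced query expressions SQL:1999 standardized is the recursive query. A minimal sketch follows, run through Python's built-in sqlite3, which accepts the SQL:1999-style WITH RECURSIVE syntax; the table and data are invented.

```python
# A recursive query (an SQL:1999 feature) walking a reporting chain.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, manager_id INTEGER);
    INSERT INTO employees VALUES
        (1, 'Ada', NULL), (2, 'Ben', 1), (3, 'Cho', 2);
""")

# Walk upward from one employee to the top of the hierarchy -- a query
# that pre-SQL:1999 dialects could not express without vendor extensions.
rows = conn.execute("""
    WITH RECURSIVE chain(id, name, manager_id) AS (
        SELECT id, name, manager_id FROM employees WHERE name = 'Cho'
        UNION ALL
        SELECT e.id, e.name, e.manager_id
        FROM employees e JOIN chain c ON e.id = c.manager_id
    )
    SELECT name FROM chain
""").fetchall()

print([r[0] for r in rows])  # ['Cho', 'Ben', 'Ada']
```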
Author: Ian H. Witten
Release Date: 2005-07-13
Data Mining, Second Edition, describes data mining techniques and shows how they work. The book is a major revision of the first edition that appeared in 1999. While the basic core remains the same, it has been updated to reflect the changes that have taken place over five years, and now has nearly double the references. The highlights of this new edition include thirty new technique sections; an enhanced Weka machine learning workbench, which now features an interactive interface; comprehensive information on neural networks; a new section on Bayesian networks; and much more. This text is designed for information systems practitioners, programmers, consultants, developers, information technology managers, and specification writers, as well as professors and students of graduate-level data mining and machine learning courses.
• Algorithmic methods at the heart of successful data mining, including tried-and-true techniques as well as leading-edge methods
• Performance improvement techniques that work by transforming the input or output
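As a taste of the algorithmic methods such a book surveys, here is a minimal naive Bayes classifier over categorical features in plain Python. The toy data, smoothing constant, and names are invented for illustration; this is not the book's (or Weka's) code.

```python
# A minimal naive Bayes classifier with Laplace smoothing.
from collections import Counter, defaultdict

train = [({"outlook": "sunny", "windy": "no"}, "play"),
         ({"outlook": "sunny", "windy": "yes"}, "stay"),
         ({"outlook": "rainy", "windy": "yes"}, "stay"),
         ({"outlook": "overcast", "windy": "no"}, "play")]

labels = Counter(label for _, label in train)
counts = defaultdict(Counter)  # (feature, label) -> Counter of values
for features, label in train:
    for f, v in features.items():
        counts[(f, label)][v] += 1

def predict(features):
    def score(label):
        p = labels[label] / len(train)  # class prior
        for f, v in features.items():
            # Laplace smoothing so unseen values never zero out the product.
            p *= (counts[(f, label)][v] + 1) / (labels[label] + 2)
        return p
    return max(labels, key=score)

print(predict({"outlook": "sunny", "windy": "no"}))  # 'play'
```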
Author: Joe Celko
Publisher: Morgan Kaufmann
Release Date: 2008-01-22
Perfectly intelligent programmers often struggle when forced to work with SQL. Why? Joe Celko believes the problem lies with their procedural programming mindset, which keeps them from taking full advantage of the power of declarative languages. The result is overly complex and inefficient code, not to mention lost productivity. This book will change the way you think about the problems you solve with SQL programs. Focusing on three key table-based techniques, Celko reveals their power through detailed examples and clear explanations. As you master these techniques, you’ll find you are able to conceptualize problems as rooted in sets and solvable through declarative programming. Before long, you’ll be coding more quickly, writing more efficient code, and applying the full power of SQL.
• Filled with the insights of one of the world’s leading SQL authorities, noted for his knowledge and his ability to teach what he knows.
• Focuses on auxiliary tables (for computing functions and other values by joins), temporal tables (for temporal queries, historical data, and audit information), and virtual tables (for improved performance).
• Presents clear guidance for selecting and correctly applying the right table technique.
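To show the flavor of the first of those techniques, here is a minimal auxiliary-table sketch run through Python's built-in sqlite3: a precomputed series table lets a join do work a procedural programmer would write as a loop. The schema and data are invented.

```python
# The auxiliary-table technique: compute values by joining against a
# precomputed series table instead of looping in application code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE series (n INTEGER PRIMARY KEY);          -- auxiliary table
    CREATE TABLE orders (order_id INTEGER, qty INTEGER);
    INSERT INTO orders VALUES (101, 3), (102, 2);
""")
conn.executemany("INSERT INTO series VALUES (?)", [(i,) for i in range(1, 1001)])

# Declarative "explosion" of each order into one row per unit.
rows = conn.execute("""
    SELECT o.order_id, s.n AS unit_no
    FROM orders o JOIN series s ON s.n <= o.qty
    ORDER BY o.order_id, s.n
""").fetchall()

print(rows)  # [(101, 1), (101, 2), (101, 3), (102, 1), (102, 2)]
```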
Tuning your database for optimal performance means more than following a few short steps in a vendor-specific guide. For maximum improvement, you need a broad and deep knowledge of basic tuning principles, the ability to gather data in a systematic way, and the skill to make your system run faster. This is an art as well as a science, and Database Tuning: Principles, Experiments, and Troubleshooting Techniques will help you develop portable skills that will allow you to tune a wide variety of database systems on a multitude of hardware and operating systems. Further, these skills, combined with the scripts provided for validating results, are exactly what you need to evaluate competing database products and to choose the right one.
• Foreword by Jim Gray, with invited chapters by Joe Celko and Alberto Lerner
• Includes industrial contributions by Bill McKenna (RedBrick/Informix), Hany Saleeb (Oracle), Tim Shetler (TimesTen), Judy Smith (Deutsche Bank), and Ron Yorita (IBM)
• Covers the entire system environment: hardware, operating system, transactions, indexes, queries, table design, and application analysis
• Contains experiments (scripts available on the author's site) to help you verify a system's effectiveness in your own environment
• Presents special topics, including data warehousing, Web support, main memory databases, specialized databases, and financial time series
• Describes performance-monitoring techniques that will help you recognize and troubleshoot problems
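In the spirit of the experiment-driven approach the blurb describes, here is a minimal sketch using Python's built-in sqlite3 that compares a query plan before and after adding an index; the table, data, and index names are invented.

```python
# A tiny tuning experiment: inspect the query plan before and after indexing.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER, symbol TEXT, price REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?, ?)",
                 [(i, f"SYM{i % 100}", float(i)) for i in range(10_000)])

query = "SELECT avg(price) FROM trades WHERE symbol = 'SYM7'"

# Before: the planner has no choice but a full table scan.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

conn.execute("CREATE INDEX idx_trades_symbol ON trades(symbol)")

# After: the same query is answered through the index.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```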