SQL Archives

Introduction to SQL Server 2016 CTP2

August 11, 2015

Microsoft has released SQL Server 2016 Community Technology Preview 2.2 (CTP2) for evaluation and trial use. It features highly anticipated in-memory performance improvements that deliver faster queries from the database engine, deeper enterprise insight through Analysis Services, new Always Encrypted security technology, and new hybrid cloud support.

Microsoft’s new SQL Server 2016 prioritizes mission-critical workloads through improved in-memory performance, boasting transaction throughput up to 30x faster and analytic queries up to 100x faster than traditional disk-based tables. The database engine supports In-Memory OLTP (OnLine Transaction Processing) to handle user requests and improve responsiveness by defining a heavily queried table as memory optimized. Memory-optimized tables support columnstore indexes, and the engine adds temporal tables, PolyBase queries from Transact-SQL against Hadoop, MARS (Multiple Active Result Sets) connections, and natively compiled stored procedures, all with familiar SQL Server syntax, to bring real-time analytics to the enterprise.

Although CTP2 does not yet expose it in the SQL Server Management Studio UI, Managed Backup to Microsoft Azure supports entire system databases on both custom and automated backup schedules and uses block blob storage for the backup files. CTP2 also adds built-in JSON support, which reduces the joins and schema changes required as an application evolves. The CTP2 database engine now creates multiple tempdb data files by default to reduce allocation contention under concurrent workloads. The new Query Store captures and monitors query plans and includes reports on performance history, while Live Query Statistics lets administrators watch the execution of a running query in real time.
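As a rough illustration of these engine features (the table and column names below are hypothetical, and the database is assumed to already have a memory-optimized filegroup), a heavily queried table can be declared memory-optimized and its results returned as JSON:

CREATE TABLE dbo.ShoppingCart
(
    CartID     int       NOT NULL PRIMARY KEY NONCLUSTERED,
    CustomerID int       NOT NULL,
    CreatedUtc datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Return the query results as JSON instead of a tabular result set
SELECT CartID, CustomerID, CreatedUtc
FROM dbo.ShoppingCart
FOR JSON AUTO;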

SQL Server 2016 Analysis Services (SSAS) delivers decision support and business intelligence (BI) solutions built on analytical data compiled from internal and third-party reporting tools across the enterprise. New features such as parallel processing of multidimensional and tabular model databases, support for configuring computer accounts as administrators, a Database Consistency Checker (DBCC) that tests for both physical and structural corruption, and Power Pivot for running Excel data models in SharePoint all come together to create a reliable data source for analysis and reporting. SQL Server 2016 Reporting Services (SSRS) lets administrators create, manage, and deploy reports to subscribed members. Reports can be compiled from relational, multidimensional, or XML-based data sources and visualized in high DPI as charts, maps, and other custom graphical representations. SSRS works in the browser and has native applications for Windows, iOS, and Android to increase accessibility.

SQL Server 2016 introduces Always Encrypted technology to protect sensitive enterprise data even while the database is maintained by administrators. Encryption and decryption happen at the driver level using master keys that never leave the client, which minimizes the chance of data compromise and eases the rollout of changes to existing database applications. In the database engine, row-level security controls access to rows based on the characteristics of the user executing a query, while dynamic data masking is a new, policy-based security feature that masks designated database fields from users who lack the proper permissions (a small example follows).
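A minimal sketch of dynamic data masking (the table, column, and user names here are hypothetical) shows how a masking function is declared directly on a column so that users without the UNMASK permission see obfuscated values:

CREATE TABLE dbo.Customer
(
    CustomerID int          NOT NULL PRIMARY KEY,
    Email      varchar(100) MASKED WITH (FUNCTION = 'email()') NOT NULL,
    CreditCard varchar(19)  MASKED WITH (FUNCTION = 'partial(0, "XXXX-XXXX-XXXX-", 4)') NULL
);

-- ReportingUser sees masked values when querying the table
GRANT SELECT ON dbo.Customer TO ReportingUser;
-- GRANT UNMASK TO ReportingUser;  -- would reveal the underlying data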

SQL Server, Microsoft Azure, and the new hybrid cloud offer portability for enterprises looking to move into the cloud, with the ability to keep running code and housing data on-premises with cloud access, or to run entirely in the cloud. Azure’s Stretch Database moves historical rows out of operational tables into secure Azure storage while keeping them queryable (a sketch of enabling it follows). Power BI with on-premises data offers new interactive querying through SQL Server Analysis Services. Hybrid scenarios are supported with SSIS running standalone or integrated with Azure Data Factory. Managed Backup to Azure reportedly cuts in-house backup storage by as much as 50% and supports larger databases through block blobs. Migrating an on-premises SQL Server to Azure is nearly point-and-click. The hybrid cloud also offers AlwaysOn availability groups, which create database replicas that can support load balancing.
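As a sketch only (the database and server names are hypothetical, and the exact clauses have shifted between preview builds), enabling Stretch Database involves turning on the instance-level option and then pointing the database at an Azure server:

-- Allow the instance to use remote data archiving (Stretch Database)
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- Point the database at an Azure SQL Database server
ALTER DATABASE SalesHistory
    SET REMOTE_DATA_ARCHIVE = ON (SERVER = N'myazureserver.database.windows.net');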

These new capabilities in SQL Server 2016 CTP2 are just a preview of what enterprises will be able to use for software performance optimization across database design and management, analytics, reporting, and cloud access. The final release of SQL Server 2016 will be optimized even further to meet unique business needs!

Querying Temporal Tables

August 6, 2015

In software performance optimization, real data sources are dynamic, since business decisions often rely on insights that must be retrieved at a moment’s notice. SQL Server 2016 gives database administrators the ability to use temporal tables to bring database insights to decision-makers both in real time and from the past. These temporal tables, also known as system-versioned or assertion tables, make it possible to query data as it changes over time in order to understand shifting business trends, improve the auditing of data changes, maintain slowly changing dimensions (SCD), and ease the recovery of tables after data corruption or loss.

Temporal table queries use the familiar SELECT ... FROM syntax together with the new FOR SYSTEM_TIME clause, which takes one of the four sub-clauses below (a sample query follows the list):

AS OF <date_time>: queries the column data values of the temporal table, along with the SystemStartTime and SystemEndTime of its corresponding history table, and returns the rows that were valid at the point in time given in the expression.
FROM <start_date_time> TO <end_date_time>: queries the same current and history data and returns the record versions that were valid between the FROM datetime and the TO datetime.
BETWEEN <start_date_time> AND <end_date_time>: behaves like FROM ... TO, returning the rows that were created or otherwise valid on and between the BETWEEN start datetime and the AND end datetime.
CONTAINED IN (<start_date_time>, <end_date_time>): queries the temporal table and its corresponding history table and returns all record versions that were both opened and closed within the window defined by the start and end datetimes.
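For example, a minimal sketch (assuming a hypothetical system-versioned table dbo.Employee) of an AS OF query that returns each row as it existed at a given moment:

SELECT EmployeeID, Name, Department
FROM dbo.Employee
    FOR SYSTEM_TIME AS OF '2015-08-01T00:00:00'
WHERE Department = 'Sales';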

Together, these temporal query expressions reduce query processing time, improve productivity and decision-making, and simplify data aggregation for auditing and reporting purposes. Although SQL Server 2016 restricts how temporal tables and their record data can be altered, to preserve integrity for auditing, database administrators with the proper permissions can still rebuild tables, create indexes, and extract statistics for both the live temporal table and its history table.

As a Microsoft partner with experience in SQL database design and maintenance services, Ayoka Systems brings expertise in database architecture that spans industries, from health care to marketing and inventory management, to meet your enterprise database management needs.

Introduction to Temporal Tables

August 4, 2015

Microsoft has recently released SQL Server 2016 as Community Technology Preview 2 (CTP2), which uses temporal tables to simplify auditing records and restoring tables by retrieving their temporal history data. Creating new temporal table schemas, and scaling existing database schemas to the new system-versioned architecture, can improve processing speed and add an extra layer of security to tables in the database.

System-Versioned Tables

A temporal table, also referred to as a system-versioned table, retains the past values of each record so the information can be queried at any point in time from a corresponding history table, rather than keeping only the current value of each record as it is updated. For any given record there is one current version in the current table and zero or more prior versions in the history table. When a record is changed by an UPDATE or DELETE, the current version is kept in its table while the previous version is written to the history table as a row stamped with the time the action was executed.

Creating a new temporal table is simple. It requires enabling system versioning for the table, declaring two non-null datetime2 period columns that record when each row version was valid, defining a primary key on the current table, and specifying a history table with the same data columns so that past versions can be queried. Similar non-temporal schemas that track history with triggers or change data capture can be converted by executing ALTER TABLE to add the period columns (which can be hidden so existing queries are unaffected) and enable system versioning, then removing the trigger or change data capture processes. SQL Server’s data consistency check runs to ensure there are no overlaps or gaps in the period data. The history table has the same set of columns as the current table, but with all constraints removed, and is given its own set of indexes and statistics so that it retains its validity. This lets the user query the history table with specific temporal queries, shortening query time and simplifying the entire process of extracting historical data. SQL Server 2016 also supports clustered and nonclustered rowstore indexing strategies to optimize both storage size and database performance.
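A rough sketch of that syntax, using hypothetical table and column names, looks like this:

CREATE TABLE dbo.Employee
(
    EmployeeID   int           NOT NULL PRIMARY KEY CLUSTERED,
    Name         nvarchar(100) NOT NULL,
    Department   nvarchar(50)  NOT NULL,
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));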

Integrating system-versioned tables greatly simplifies querying historical data and offers a more secure way to save current version data while retaining the integrity of historical data as it is updated, improving data management capabilities. The permissions required to create and alter temporal history table schemas add an extra layer of security, and the history table itself cannot be modified directly while system versioning is enabled, regardless of the permissions granted to the user. Dropping or altering columns, deleting rows of data, and dropping the table are all disallowed, and the database audit log records all attempts to create or alter a temporal history table. This keeps the history table a secure and reliable record that is well suited to auditing.

For software performance optimization, temporal tables help reduce database query time, improve security through their structural modification requirements and the ability to restore corrupted or lost data, and increase the validity of historical data as it is audited over time. At Ayoka Systems, our database design services offer insight into best practices for optimal database engineering, covering updates, maintenance, and integration for an array of industries while abiding by industry and governmental regulations.

3 Easy Steps to Oracle Database Connectivity

January 9, 2014

You’re a .NET developer. You’ve worked with MS SQL Server, PostgreSQL, and MySQL databases on more than a few projects. You feel pretty good about your abilities, right? Now, enter an opportunity to work with Oracle.
Oracle Database connectivity
Oracle shouldn’t be too bad, right? How different could it be? To the inexperienced, it might seem there is no difference at all. However, depending on the client and the level of security desired, it can be vastly different.

Before you begin re-evaluating your life choices, remember: it’s only technology, it can be conquered, and we will show you the three easy steps you need to take! Read more here…

Top Differences in Code Behind and Reporting Services RDL

September 10, 2013

Ok, so I’ll try to keep this brief. For one of our long-term projects (read: older!), we are using ASP.NET v3.5 WebForms, which is a precursor to the MVC design model of building applications. Currently in our .NET applications we use MVC v.4 with Entity Framework v.5. In this project, each webpage is written and saved as an aspx file. In the aspx you have all your traditional HTML elements along with JavaScript, jQuery, and CSS declarations. With each aspx file there is a tightly coupled aspx.cs file, known as the code behind file. The code behind acts, if you will, as the page’s personal controller: in MVC one controller file handles all the interactions for a complete directory, whereas the code behind handles all the interactions of the individual page it is associated with.

Through the code behind we can access the service / data layer. This allows us to call stored procedures using table adapters, which are data sets that represent the database tables. These table adapters also store information about the stored procedures and define which variables are passed into the SQL stored procedure and how the tabular data returned from the procedure is received. Once the data is passed back to us, we can use the C# .NET libraries to further manipulate the incoming data, format it, and perform calculations as needed.
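As a tiny illustration (the procedure, table, and column names here are hypothetical), the kind of parameterized stored procedure a table adapter typically wraps looks like this:

CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID int,
    @StartDate  datetime
AS
BEGIN
    SET NOCOUNT ON;

    -- Tabular result the table adapter maps back into a typed data set
    SELECT OrderID, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
      AND OrderDate >= @StartDate;
END;
GO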

For our manufacturing plants, we have a plant performance summary report, which is one of eight reports run directly from MS SQL Reporting Services. The Report Viewer in use is v.9, specific to SQL Server 2005, and it lives in a separate project within the overall solution. A set of global (corporate) reports runs from RDL files. These files are nothing more than a definition of how the output should be shown on the page (the report itself). The Report Viewer takes in, where applicable, input values that are sent to the SQL stored procedure, which gathers the requested data and returns it to the RDL file for final output. There is no further manipulation of the data, save for formatting concerns like date formats, two or three decimal places, and the like.

ASP.NET v3.5 WebForms versus MVC

For our plant performance report, we need certain utilization and equipment availability calculations. Everywhere else we perform these calculations, we have the luxury of retrieving data from the database and using loops and other tricks in code to get the correct ordering of equipment statuses and durations. Since there is no code behind being utilized here, we have to reproduce those same tricks inside the stored procedure itself, using temporary tables and SQL custom functions.
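A rough sketch of that approach (every table, column, and function name here is hypothetical) stages the equipment status rows in a temporary table and uses a scalar function for the utilization math:

-- Scalar function for the utilization percentage
CREATE FUNCTION dbo.fn_Utilization (@RunMinutes decimal(10,2), @TotalMinutes decimal(10,2))
RETURNS decimal(5,2)
AS
BEGIN
    RETURN CASE WHEN @TotalMinutes = 0 THEN 0
                ELSE ROUND(100.0 * @RunMinutes / @TotalMinutes, 2) END;
END;
GO

-- Inside the reporting stored procedure: stage ordered statuses, then aggregate
SELECT EquipmentID, StatusCode, DurationMinutes,
       ROW_NUMBER() OVER (PARTITION BY EquipmentID ORDER BY StatusStart) AS StatusSeq
INTO #EquipmentStatus
FROM dbo.EquipmentStatusLog;

SELECT EquipmentID,
       dbo.fn_Utilization(
           SUM(CASE WHEN StatusCode = 'RUN' THEN DurationMinutes ELSE 0 END),
           SUM(DurationMinutes)) AS UtilizationPct
FROM #EquipmentStatus
GROUP BY EquipmentID;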

In either process (stored procedure or aspx code behind), this is a cumbersome proposition, but handling it in a .NET C# environment is a lot easier than in a stored procedure environment.

Invoking Managed Code from SQL Server

September 1, 2009

One of the flagship features in SQL Server 2005 is the ability to write stored procedures in any language supported by .NET. This gives developers greater flexibility to implement, in CLR (Common Language Runtime) stored procedures, complex logic that is not easily programmed in a T-SQL stored procedure.

There are two parts to a CLR routine: the .NET assembly containing the managed code, and the function or procedure object in SQL Server that references it. You cannot use either, however, until you enable CLR integration. The following steps walk through invoking managed code from SQL Server.

Step 1

By default, CLR integration is disabled in SQL Server 2005. To enable the managed code execution feature in SQL Server, execute the following in a query editor window:

EXEC sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO

sp_configure is a system stored procedure for changing server-level settings. Depending on the option, some settings take effect on the fly after a RECONFIGURE, while others require the server to be stopped and started again.

You can also enable the CLR integration using Surface Area Configuration from the SQL Server 2005 configuration tools.

Step 2

Now we need to set database-level permissions before deploying assemblies. A quick-and-dirty way is to set the TRUSTWORTHY property of the database to ON (it is OFF by default).

ALTER DATABASE <database_name> SET TRUSTWORTHY ON

There are other ways of allowing external access or unsafe assemblies in the database. A more appropriate approach is to sign the assembly with a key and grant access through permission settings in the database.

Step 3

Now it is time to deploy the assembly in the database. You need to create an assembly for every DLL referenced by the managed code before creating the assembly for the managed code itself.

CREATE ASSEMBLY <assembly_name>
AUTHORIZATION dbo
FROM '<absolute_path_to_dll>'
WITH PERMISSION_SET = UNSAFE
GO

The path to the DLL must be an absolute path, not a relative path. Do not use function overloading in the managed code, as it is not supported by SQL Server.

Step 4

Create a function or procedure that interacts with the managed code.

CREATE PROCEDURE <procedure_name>
    @param1 type,
    @param2 type
AS EXTERNAL NAME <assembly_name>.[<class_name>].<method_name>
GO

You are now all set to call the stored procedure and invoke managed code.
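For example, if the managed method were registered as a procedure named dbo.usp_FormatInvoice (a hypothetical name), calling it looks like any other stored procedure call:

EXEC dbo.usp_FormatInvoice @param1 = 42, @param2 = N'US';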

SQL Server Best Practices – Ayoka Lunch n’ Learn Series

May 7, 2009

About once a month or so, we developers here at Ayoka like to get together over lunch to discuss various software development topics in a series we call Ayoka Lunch n’ Learn. This month our resident SQL Server expert, Steve Chang, discussed SQL Server best practices. You can check out the four-part video series below. You can also download the complete PowerPoint presentation here: SQL Server Best Practices – Ayoka Lunch n’ Learn.

Part 1

Part 2

Part 3

Part 4

Setting up Apache, MySQL and PHP for Mac developers

January 16, 2009

I’ve been playing with Mac OS X Leopard for a little while now, and I really like all the development tools it comes with out of the box. For example, Apache 2, PHP, Perl, and Python all come bundled with the operating system. MySQL, however, is not included by default (although it is included in the server version of Leopard), and since most PHP development revolves around MySQL, it’s pretty important to get the two running together. Going through the various documentation available online, I didn’t find all the instructions in one place, so I’m posting them here as a reference for me and anyone else interested in setting it up:

1. Download and install MySQL Community Server (package format) from http://dev.mysql.com/downloads/. Make sure you also install the MySQLStartupItem from the same package so that MySQL starts up automatically.

2. Download and install MySQL GUI Tools (Universal binaries) from the same location as above. You can now log in to localhost using the MySQL Query Browser or MySQL Administrator tool with root as the username and no password.

3. Enable Web sharing by checking the corresponding item from the Apple Menu > System Preferences > Sharing. This will enable the Apache web server.

4. Uncomment the line starting with “LoadModule php5_module” in /private/etc/apache2/httpd.conf by removing the hash mark (#).

5. Copy the file /etc/php.ini.default to /etc/php.ini if it doesn’t already exist.

6. Edit /etc/php.ini and change:

mysql.default_socket =

to

mysql.default_socket = /tmp/mysql.sock

and

mysqli.default_socket =

to

mysqli.default_socket = /tmp/mysql.sock

7. That’s it. Now just restart Apache from the terminal using the command sudo apachectl restart, and you’re done.
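As a quick sanity check (assuming the default package install location of /usr/local/mysql), you can connect from the terminal and run a trivial query to confirm MySQL is up and listening on the socket referenced above:

-- connect first with: /usr/local/mysql/bin/mysql -u root --socket=/tmp/mysql.sock
SELECT VERSION();
SHOW DATABASES;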