Database Weekly is the seven-day roundup from SQLServerCentral. We scour the Internet with a fine-toothed set of queries to compile the most informative weekly newsletter about SQL Server. Over 670,000 SQL Server professionals rely on it to kick-start their week.

Sign up for our newsletter now to keep up to date with all the latest news.

Latest Database Weekly Newsletter Articles


The Sorry State of .NET ORMs
The biggest thing that struck me as needing improvement was the lack of consistency among products. A really great starting point would be a checklist, so that new users researching ORMs for their project could tell almost instantly which boxes are ticked. read more...

Log Shipping Magic: Using A Differential Backup to Refresh a Delayed Secondary
Run a compressed differential backup against your log shipping primary. Leave all the log shipping backup and copy jobs running, though — you don’t need to expose yourself to the potential of data loss. After the differential backup finishes, copy it over to a nice fast place to restore to your secondary server. Disable the log shipping restore job for that database, and restore the differential backup with NORECOVERY. This will effectively catch you up, and then you can re-enable the log shipping restore. read more...
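The steps above might look something like the following sketch; the database, file paths, and job name are placeholders, not taken from the article:

```sql
-- 1. Compressed differential backup on the primary
BACKUP DATABASE [MyDatabase]
TO DISK = N'X:\Backups\MyDatabase_diff.bak'
WITH DIFFERENTIAL, COMPRESSION;

-- 2. Copy the file to the secondary, then disable the restore job there
EXEC msdb.dbo.sp_update_job
    @job_name = N'LSRestore_MyServer_MyDatabase', @enabled = 0;

-- 3. Restore on the secondary, leaving it ready for further log restores
RESTORE DATABASE [MyDatabase]
FROM DISK = N'Y:\Restore\MyDatabase_diff.bak'
WITH NORECOVERY;

-- 4. Re-enable the log shipping restore job
EXEC msdb.dbo.sp_update_job
    @job_name = N'LSRestore_MyServer_MyDatabase', @enabled = 1;
```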

Investigating Deadlocks
We can get more information about deadlocks by tracing the Deadlock Graph event in the SQL Profiler. You can right-click the TextData column of the deadlock graph trace and save the payload as an XML file. Then, you can open the file in your favorite XML editor or Internet Explorer and see exactly which SQL statements are conflicting and which process was terminated. read more...
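As an alternative to running Profiler, on SQL Server 2008 and later the built-in system_health Extended Events session captures the same deadlock graph retrospectively; a query along these lines pulls it from the ring buffer:

```sql
-- Pull captured deadlock graphs from the system_health session's ring buffer
SELECT XEventData.XEvent.query('(data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT CAST(st.target_data AS XML) AS TargetData
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS Data
CROSS APPLY TargetData.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent);
```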

JUNK SCIENCE
We prefer to work with tools and results that are familiar to us and support our beliefs. How many times have you heard people vehemently oppose anything made by Microsoft, no matter if it is the right choice for them or not? read more...

http://www.sqlsafety.com/?p=908
A log chain starts with a full backup of the database. Usually, a new log chain is only started when the database is backed up for the first time or after the recovery model is switched from simple recovery to full or bulk-logged recovery. This is accurate enough, except for the window between changing the recovery model to full or bulk-logged and the database's next full backup. After the recovery model is changed, the transaction log will continue to be truncated upon checkpoint. This recovery state is sometimes referred to as the "pseudo-simple" recovery model. read more...
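In other words, switching the recovery model alone does not start the log chain; a full backup does. A minimal sketch (database and paths are placeholders):

```sql
ALTER DATABASE [MyDatabase] SET RECOVERY FULL;
-- Still "pseudo-simple": the log is truncated at checkpoint until...
BACKUP DATABASE [MyDatabase] TO DISK = N'X:\Backups\MyDatabase_full.bak';
-- ...a full backup starts the log chain; log backups now succeed
BACKUP LOG [MyDatabase] TO DISK = N'X:\Backups\MyDatabase_log.trn';
```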

SQL Databases: General Availability of Azure SQL Database Service Tiers
I’m happy to announce the General Availability of our new Azure SQL Database service tiers - Basic, Standard, and Premium. The SQL Database service within Azure provides a compelling database-as-a-service offering that enables you to quickly innovate & stand up and run SQL databases without having to manage or operate VMs or infrastructure. read more...

Locating business logic in applications
One of the most pernicious and pervasive myths in business-application development is that all business logic must be kept from the relational database and contained in a separate ‘business layer’. read more...

Analysis Services Partitions
Partitions are a great feature in Analysis Services that allow you to split measure groups into separate physical files. All measure groups have one partition by default, but by splitting that partition you will gain improved query and processing performance. read more...

Tip: RECOVERY_PENDING is not the same as RECOVERING
One of the more common scenarios for RECOVERY_PENDING is that a database file is missing, or renamed, or moved, or who knows what. That’s definitely the place to start. After all, the error log – remember the error log? – will often tell you that this is the problem. Note that I do not recommend rebooting the server, or even restarting SQL. It’s a problem with a single database. read more...
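A quick, hedged sketch of where to start before touching the service: see which databases are not ONLINE, then check the file paths SQL Server expects (the database name is a placeholder):

```sql
-- Which databases are in trouble?
SELECT d.name, d.state_desc
FROM sys.databases AS d
WHERE d.state_desc <> 'ONLINE';

-- For the affected database, check the files SQL Server is looking for
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'MyDatabase');
```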

New Add-on for SSMS: T-SQL Flex
T-SQL Flex is a free, open-source add-on for SQL Server Management Studio that does only two things: It scripts SQL query results to INSERT statements. It scripts SQL query results to Excel-compatible spreadsheets (specifically, XML Spreadsheet 2003 format). read more...

Identifying large queries using Server Side Traces
Who are your worst offenders? By offenders I mean queries that are consuming the most resources on your server(s). I know Extended Events have a lower impact, but I like server side traces — not only because once you’ve set one up, setting up others is very simple, but also because there’s a really good tool called ClearTrace that can be used to analyse the output. read more...
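A minimal server side trace along these lines (file path and size are placeholders, not from the article) captures completed batches with their cost columns; the resulting .trc files can then be loaded into ClearTrace:

```sql
DECLARE @TraceID int, @on bit = 1;

-- Create a file-based trace (path is a placeholder; 50 MB rollover files)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'X:\Traces\WorstOffenders', 50;

-- Event 12 = SQL:BatchCompleted; capture text and resource columns
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;  -- TextData
EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- Duration
EXEC sp_trace_setevent @TraceID, 12, 16, @on;  -- Reads
EXEC sp_trace_setevent @TraceID, 12, 17, @on;  -- Writes
EXEC sp_trace_setevent @TraceID, 12, 18, @on;  -- CPU

EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
```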

Using Google Charts API to Visualize Schema Changes
Last week I worked on a new email report using Google Charts and liked it so much that I decided to share it here with anyone who finds it useful. It uses information from a Schema Changes Audit table that is maintained by a DDL trigger. A record is added to this table every time anyone changes objects on the server. read more...
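The audit table and trigger described above could be sketched roughly as follows; the table, trigger, and column names here are assumptions for illustration, not the article's actual schema:

```sql
-- Hypothetical audit table populated by a server-level DDL trigger
CREATE TABLE dbo.SchemaChangesAudit (
    EventTime   datetime2     NOT NULL DEFAULT SYSDATETIME(),
    LoginName   sysname       NOT NULL,
    EventType   nvarchar(128) NOT NULL,
    ObjectName  nvarchar(256) NULL,
    TSQLCommand nvarchar(max) NULL
);
GO
CREATE TRIGGER trg_SchemaChangesAudit
ON ALL SERVER
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e xml = EVENTDATA();  -- XML describing the DDL event
    INSERT INTO dbo.SchemaChangesAudit (LoginName, EventType, ObjectName, TSQLCommand)
    VALUES (
        @e.value('(/EVENT_INSTANCE/LoginName)[1]',  'sysname'),
        @e.value('(/EVENT_INSTANCE/EventType)[1]',  'nvarchar(128)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END;
```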

Passwords
A password means that you have secret information that only you have. It’s what ‘secret’ means. As soon as you tell that secret information to multiple places, it’s not secret any more. Anyone who has seen my passport knows where I was born, and there are plenty of ways to work out my mother’s maiden name, yet these are considered ‘secret’ information that can be used to check that I’m me. read more...

How to Write a Git Commit Message
Separate the subject from the body with a blank line, limit the subject line to 50 characters, capitalize the subject line, do not end the subject line with a period, use the imperative mood in the subject line, wrap the body at 72 characters, and use the body to explain what and why vs. how. read more...

Choosing SEQUENCE instead of IDENTITY? Watch your step.
Server side DDL triggers do not have a default schema, so you must add the schema name to the SEQUENCE call inside the table's DEFAULT definition. I admit I frequently forget to add the schema name when I am using the dbo schema. The fact that SQL Server does not produce the correct error message looks like a bug. read more...
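A minimal sketch of the safe pattern (object names are placeholders): schema-qualifying the sequence in the DEFAULT means it resolves regardless of the caller's default schema.

```sql
CREATE SEQUENCE dbo.OrderSeq AS int START WITH 1 INCREMENT BY 1;

CREATE TABLE dbo.Orders (
    OrderID int NOT NULL
        -- Schema-qualified: resolves even where no default schema applies
        DEFAULT (NEXT VALUE FOR dbo.OrderSeq),
    OrderDate datetime2 NOT NULL DEFAULT SYSDATETIME()
);
```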

SQL Functions and Plan Caching
I had a table-valued function in an OLTP environment that was incurring a lot of compilation cost (6.7 seconds) on every execution. I altered the function into a multistatement table-valued function. This way SQL Server treats it like a stored procedure and its plan can be cached. You can see from the execution results below that there is no parse and compile time, and the execution time is the same 125 ms. read more...
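The two shapes being contrasted might look like this (table and function names are illustrative, not from the article): an inline TVF is expanded into the calling query's plan like a view, while a multistatement TVF has a body of its own that can be compiled once and cached.

```sql
-- Inline TVF: a single RETURN, expanded into the caller's plan
CREATE FUNCTION dbo.GetCustomerOrders (@CustomerID int)
RETURNS TABLE
AS RETURN
(
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);
GO
-- Multistatement TVF: its own body and table variable, cached like a procedure
CREATE FUNCTION dbo.GetCustomerOrdersMS (@CustomerID int)
RETURNS @Result TABLE (OrderID int, OrderDate datetime2)
AS
BEGIN
    INSERT INTO @Result (OrderID, OrderDate)
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
    RETURN;
END;
```

Note that multistatement TVFs come with their own trade-offs around cardinality estimation, so the article's switch is a targeted fix for compilation cost rather than a general recommendation.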

MAD (Microsoft Azure DocumentDB) Reference Documentation
Microsoft Azure DocumentDB is a document-oriented, NoSQL database service designed for modern mobile and web applications. DocumentDB delivers consistently fast reads and writes, schema flexibility, and the ability to easily scale a database up and down on demand. DocumentDB enables complex ad hoc queries using the SQL dialect, supports well-defined consistency levels, and offers JavaScript language-integrated, multi-document transaction processing using the familiar programming model of stored procedures, triggers, and UDFs. read more...

DocumentDB SQL Query Language
Azure DocumentDB supports querying of documents using a familiar SQL-like grammar over hierarchical JSON documents, without requiring an explicit schema or the creation of secondary indexes. This topic provides reference documentation for the DocumentDB SQL query language. read more...
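A flavour of the grammar (the property names here are invented for illustration): filters can reach directly into nested JSON properties with dotted paths, no schema declared up front.

```sql
-- Query hierarchical JSON documents; c is an alias for each document
SELECT c.name, c.address.city
FROM c
WHERE c.address.city = "Seattle"
```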

DocumentDB programming: Stored procedures, triggers, and UDFs
DocumentDB’s language integrated, transactional execution of JavaScript lets developers write stored procedures, triggers and user defined functions (UDFs) natively in JavaScript. This allows developers to write application logic which can be shipped and executed directly on the database storage partitions. read more...

Comparing SQL Server Datatypes, Size and Performance for Storing Numbers
There are times when more than one data type could hold the values needed. However, there are performance and size reasons to generally prefer the smaller data type. To look at this, let us create three tables with 150 million rows and give them primary keys, but no other indexes. One will use INT, another BIGINT, and the final one DECIMAL. read more...
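The three-table setup described above could be sketched like this (table names are placeholders); the per-value storage sizes are fixed by the type, which is where the size difference comes from:

```sql
-- Three candidate key types for the same values; storage per value differs
CREATE TABLE dbo.TestInt    (ID int           NOT NULL PRIMARY KEY);  -- 4 bytes
CREATE TABLE dbo.TestBigInt (ID bigint        NOT NULL PRIMARY KEY);  -- 8 bytes
CREATE TABLE dbo.TestDec    (ID decimal(18,0) NOT NULL PRIMARY KEY);  -- 9 bytes
```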