If you want to work in the MS SQL Server field, preparing for the technical interview is no bed of roses. The following list of MS SQL Server interview questions and answers can assist you whether you are a candidate seeking a job or a recruiter looking for the best MS SQL Server professional. You can also use it to model other questions based on this pattern or to answer questions that are similar to these.
Master Data Services Configuration Manager is the tool that we use to create or configure a Master Data Services database.
After we create a database, we can use Master Data Services Configuration Manager to create a web application and to enable
integration with Data Quality Services. Finally, we can use Master Data Services Configuration Manager to associate an MDS
database with an MDS web application.
This database property allows SQL Server to automatically update the statistics when they are deemed outdated. The update
occurs before executing a query if certain conditions are met, or after the query is executed if Auto Update Statistics
Asynchronously is used instead.
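This property can be toggled per database; a minimal sketch (the database name is a placeholder):
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS ON;
GO
-- Optionally, update outdated statistics asynchronously instead of before the query runs
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;
GO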
Installation-related logs are stored under the shared feature directory folder which was selected at the time of the first SQL Server instance installation.
e.g. If Shared Features were selected to be placed on the “C:\Program Files\Microsoft SQL Server” then logs will be created under
%programfiles%\Microsoft SQL Server\110\Setup Bootstrap\Log\<YYYYMMDD_HHMM>\
C:\Program Files\Microsoft SQL Server\100\Setup BootStrap\Release
Before SQL Server 2008, rebuilding the master database required the complete SQL Server setup media. Starting with SQL Server 2008 we no longer need the installation media; the system databases can be rebuilt with the setup.exe located in the above location.
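A sketch of the rebuild command (the instance name, sysadmin account and SA password are placeholders):
setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS=DOMAIN\AdminGroup /SAPWD=StrongPassword1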
The three conditions that will trigger an update if one is met are:
• The table had no rows when the statistics were last gathered and now has one or more rows.
• The table had 500 rows or fewer when the statistics were last gathered, and the leading column of the statistics object has since had more than 500 modifications.
• The table had more than 500 rows when the statistics were last gathered, and the leading column of the statistics object has since had more than 500 + 20% of the row count in modifications.
SQL Server Setup generates a configuration file named ConfigurationFile.ini, based upon the system default and run-time
inputs. The ConfigurationFile.ini file is a text file that contains the set of parameters in name/value pairs along with
descriptive comments. Many of the parameter names correspond to the screens and options which you see while installing
SQL Server through the wizard. We can then use the configuration file to install SQL Server with the same configuration
instead of going through each of the installation screens.
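For example, an unattended installation can reference the file like this (the path is a placeholder):
Setup.exe /ConfigurationFile=C:\SQLConfig\ConfigurationFile.ini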
When we install Microsoft SQL Server, five system databases are automatically created for each SQL Server instance. These system databases allow the database engine and administrative applications to properly manage the system:
• master
• model
• msdb
• tempdb
• Resource (SQL Server 2005 and higher only)
Starting with SQL Server 2012, SQL Server Query Optimizer creates and maintains temporary statistics in tempdb database for
the read-only database or read-only snapshot database or readable secondaries in the AlwaysOn cluster in order to ensure your
queries perform better. These temporary statistics are created and maintained by SQL Server Query Optimizer only; though you
can delete them when not needed. These statistics are suffixed with “_readonly_database_statistic” to differentiate it from the
regular statistics.
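A quick way to see them (a sketch; run inside the read-only database) is to filter sys.stats by that suffix:
SELECT name, auto_created
FROM sys.stats
WHERE name LIKE '%readonly_database_statistic%';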
We can find the configuration file in the C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log folder. There will be a
subfolder based on a timestamp of when the SQL Server 2012 installation was done.
The Master database is the heart and soul of SQL Server. It records all the system-level information. Every instance of SQL Server has its own independent Master database, as it captures instance-level configuration information. The information captured in the Master database includes SQL Server instance-level configurations, linked server configurations, SQL Server logins, Service Broker endpoints, system-level stored procedures, system-level functions, etc. Information about the system and user databases, such as their names and file locations, is also captured in the Master database.
If the master database is corrupted or unavailable, the SQL Server service will not start. In SQL Server 2005 and later versions the system objects are stored in the Resource database rather than in the master database. The master database is created using the Simple recovery model.
This is the SQL Server Instance name that all client applications will use to connect to the SQL Server.
The SQL Server optimizer uses statistics to create execution plans for SQL Server queries. It is therefore very important to update the statistics from time to time to make sure that SQL Server uses the best optimized plan to execute the queries.
Based on the components selected during installation, we will find the respective services for each component in the Windows Services, e.g. SQL Server, SQL Server Agent, SQL Server Analysis Services, SQL Server Integration Services, etc. There is a user for each and every service through which each service runs. That user is called the Service Account of that service. Mainly we categorize the service accounts as below:
Local User Account: This user account is created in the server where SQL Server is installed; this account does not have access to network resources.
Local Service Account: This is a built-in Windows account that is available for configuring services in Windows. This account has the same permissions as accounts in the local Users group, thus it has limited access to the resources on the server.
Local System Account: This is a built-in windows account that is available for configuring services in windows. This is a highly privileged account that has access to all resources in the server with administrator rights.
Network Service Account: This is a built-in windows account that is available for configuring services in windows. This has permission to access resources in the network under the computer account.
Domain Account: This account is a part of our domain that has access to network resources for which it is intended to have
permission. It is always advised to run SQL Server and related services under a domain account with minimum privilege need
to run SQL Server and its related services.
The Model database is used as a template when creating databases in SQL Server: SQL Server takes a copy of the Model database whenever a user creates a new database. This also means that if a user creates any tables, stored procedures, user-defined data types or user-defined functions within the Model database, those objects will be available in every newly created database on that particular instance of SQL Server.
If the Model database is damaged or corrupted then SQL Server Service will not start up as it will not be able to create the
tempdb database.
We can use the STATS_DATE() function to check when statistics were last updated.
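For example, to list statistics and their last update date for a table (reusing the table from the UPDATE STATISTICS example below):
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('Sales.SalesOrderDetail');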
No, it is not required. It’s not mandatory to grant Administrator permissions to the service account.
SQL Server Agent uses the MSDB database to store information related to the configuration of SQL Server Agent jobs, job schedules, alerts, operators etc. MSDB also stores information related to the configuration of Service Broker, Log Shipping, database backup and restore history, Maintenance Plan configuration, Database Mail configuration, Policy Based Management information of SQL Server 2008, etc.
If the MSDB database is corrupted or damaged then scheduling information used by SQL Server Agent will be lost. This will
result in the failure of all scheduled activities.
An Active – Passive cluster is a failover cluster configured in a way that only one cluster node is active at any given time. The other node, called the Passive node, is always online but idle, waiting for a failure of the Active node, upon which the Passive node takes over the SQL Server services and becomes the Active node; the previous Active node then becomes the Passive node.
An Active – Active cluster is a failover cluster configured in a way that both the cluster nodes are active at any given point of
time. That is, one Instance of SQL Server is running on each of the nodes always; when one of the nodes has a failure, both the
Instances run on the only one node until the failed node is brought up (after fixing the issue that caused the node failure). The
instance is then failed over back to its designated node.
To update statistics for a particular table we can use below command.
UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN
The user through which we are installing SQL Server must have administrator permissions on the Windows server.
If you are running the setup as a local administrator, you require the following user rights for the setup to run successfully:
Local Group Policy Object display name – User right
• Backup files and directories – SeBackupPrivilege
• Debug Programs – SeDebugPrivilege
• Manage auditing and security log – SeSecurityPrivilege
The tempdb database is considered a global resource that all connections and all users can access. The tempdb database holds user-created temporary database objects, such as temporary tables and temporary stored procedures. This database is also used heavily by the database engine to serialize intermediate results and to hold other temporary data and objects that are generated during processing. Tempdb is recreated every time the SQL Server service is started. Because tempdb is so heavily used by the system, many system-wide performance optimizations are necessarily focused on making tempdb as efficient as possible.
In SQL Server 2005 we have the option of installing SQL Server on the remaining nodes from the primary node, but in SQL Server 2008 we need to log in to each node separately to install the SQL Server cluster.
(SOS Scheduler) – The object that manages thread scheduling in SQL Server and allows threads to be exposed to the CPU (described in sys.dm_os_schedulers). It does not preemptively control threads but relies on their cooperation (cooperative scheduling mode). Each scheduler (one per logical CPU) accepts new tasks and hands them off to workers. The SOS Scheduler allows one worker at a time to be exposed to the CPU.
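The schedulers can be inspected through the DMV mentioned above, for example:
SELECT scheduler_id, cpu_id, status,
       current_tasks_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';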
This directory contains the common files used by all instances on a single computer e.g. SSMS, sqlcmd, BCP, DTExec, etc. These
are installed in the folder
The Resource database is a read only, hidden system database that contains all the SQL Server system objects such as sys.objects which are physically available only in the Resource database, even though they logically appear in the SYS schema of every database. The Resource Database does not contain any user data or any user metadata. By design, the Resource database is not visible under SQL Server Management Studio’s Object Explorer | Databases | System Databases Node.
The DBA shouldn’t rename or move the Resource Database file. If the files are renamed or moved from their respective
locations then SQL Server will not start. The other important thing to be considered is not to put the Resource Database files in
a compressed or encrypted NTFS file system folders as it will hinder the performance and will also possibly prevent upgrades.
Yes, we can change the Quorum setting after the Windows Cluster installation.
Scheduler
A scheduler has three important components: the Processor, the Waiter List and the Runnable Queue.
A Processor is the actual processor, which processes one thread at a time.
The Waiter List is the list of threads waiting for resources.
The Runnable Queue is a queue of threads that have all the resources they need to run but are waiting for their turn on the processor.
The scheduler puts a thread into the Runnable Queue, moves it to the Processor, and migrates it to the Waiter List when it must wait for a resource. Each thread keeps cycling through these three components until its work completes.
An instance of the Database Engine is a copy of the sqlservr.exe executable that runs as an operating system service. Each
instance manages its own system databases and one or more user databases. An instance is a complete copy of an SQL Server
installation.
The physical file names of the Resource database are mssqlsystemresource.mdf and mssqlsystemresource.ldf. These files are
located in
No, it’s not mandatory to configure the MSDTC service to install SQL Server in a Windows 2008 cluster. Installation will give you a warning but will not stop the installation.
Preemptive Scheduling is when a computer process is interrupted and the CPU’s power is given over to another process with a higher priority. This type of scheduling occurs when a process switches from running state to a ready state or from a waiting state to a ready state.
Non-Preemptive Scheduling
Non-Preemptive Scheduling allows the process to run through to completion before moving onto the next task.
There are two types of instances:
• Default instance
• Named instance
Each computer can run a maximum of 50 instances of the Database Engine. One instance can be the default instance.
The default instance has no name. If a connection request specifies only the name of the computer, the connection is made to the default instance.
A named instance is one where you specify an instance name when installing the instance. A connection request must specify both the computer name and the instance name in order to connect to the instance.
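For example, using sqlcmd (server and instance names are placeholders):
sqlcmd -S MYSERVER
connects to the default instance, while
sqlcmd -S MYSERVER\SALESINST
connects to the named instance SALESINST.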
There are three different types of objects stored in tempdb.
• Internal Objects:
1. Intermediate runs for sort.
2. Intermediate results for hash join and hash aggregates.
3. XML variables or other large object (LOB) data type variables. (text, image, ntext, varchar(max), varbinary(max))
4. Queries that need a spool to store intermediate results.
5. Keyset cursors to store the keys.
6. Static cursors to store a query result.
7. Service Broker to store messages in transit.
8. INSTEAD OF triggers to store data for internal processing.
9. DBCC CHECKDB internally uses a query that may need to spool intermediate results.
10. Query notification and event notification use Service Broker.
• Version Store:
1. Snapshot Isolation / Read Committed Snapshot Isolation
2. Triggers (AFTER triggers). INSTEAD OF triggers do not generate versions.
3. MARS (Multiple Active Result Sets)
4. Index Rebuilds
• User Objects:
1. User defined tables and indexes
2. Local and global temporary tables, bulk insert and BCP intermediate results
3. Index rebuilds with the “SORT_IN_TEMPDB” option.
SQLIO is a generic tool for testing your disk subsystem. It allows you to specify whether to test random/sequential, data block sizes, queue depth, threading, etc. Despite its name, SQLIO is not part of SQL Server, yet it can be an invaluable tool for determining whether the disk drives slated to support SQL Server can meet the system’s I/O needs.
The tool lets you push the limits of your disk subsystem in order to determine the hardware’s I/O capacity and to identify performance-related issues before deploying SQL Server.
You can use the tool to test a variety of I/O types and sizes against various disk subsystems, including direct attached storage,
network attached storage, and storage area networks.
Service Pack is abbreviated as SP, a service pack is a collection of updates and fixes, called patches, for an operating system or a software program. Many of these patches are often released before the larger service pack, but the service pack allows for an easy, single installation.
Patch – Publicly released update to fix a known bug/issue
Hotfix – update to fix a very specific issue, not always publicly released
The MSDB database is the database with the backup and restores system tables. Here are the backup and restore system tables and their purpose:
backupfile – contains one row for each data file or log file backed up
backupmediafamily – contains one row for each media family
backupmediaset – contains one row for each backup media set
backupset – contains one row for each backup set
restorefile – contains one row for each restored file
restorefilegroup – contains one row for each restored filegroup
restorehistory – contains one row for each restore operation
SQLIOSim is a tool that tries to emulate the patterns that SQL Server would expose to your system. The SQLIOSim utility simulates SQL Server read, write, sort, backup, checkpoint and read-ahead patterns for both user and system activity. The tool creates sample data and log files in order to perform SQL Server reliability and integrity tests on the disk subsystem.
Because the utility runs independently of the SQL Server engine, you can use it before installing SQL Server in order to ensure that new hardware can handle your expected loads.
SQL Server 2008 introduced a concept called “Slipstream Installation”. This is a way to deploy a SQL Server instance with all the needed Service Packs as part of the installation. Everything is installed in one go, so there is no need to deploy any other service packs after the installation.
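A sketch of a slipstream launch (the extracted service pack path is a placeholder):
setup.exe /ACTION=Install /PCUSource="C:\SQL2008SP"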
ReportServer and ReportServerTempdb
New validation feature. With this feature, you can check that your system, storage, and network configuration is suitable for a cluster.
Support for GUID partition table (GPT) disks in cluster storage. GPT disks can have partitions larger than two terabytes and have built-in redundancy in the way partition information is stored, unlike master boot record (MBR) disks.
There are some key enterprise edition only performance benefits across RAM, Parallelism, Query Plans and DISK I/O that will lead to better performance on high end systems.
1. Lock Pages in Memory
2. Advanced Scanning (aka Merry-go-round scan)
3. Larger Read Ahead Buffering on Storage Devices
4. Large Page Extensions
5. Parallel index Operations
Step 1. Click Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools >SQL Server Configuration Manager
Step 2. Go to SQL Server Configuration Manager > SQL Server Network Configuration >Protocols for <Instance Name>
Yes, it is very important to take system database backups, except for tempdb (which cannot be backed up).
Cluster continues to work but failover will not happen in case of any other failure in the active node.
In SQL Server Enterprise Edition, the advanced scan feature allows multiple tasks to share full table scans. If the execution plan
of a Transact-SQL statement requires a scan of the data pages in a table and the Database Engine detects that the table is already
being scanned for another execution plan, the Database Engine joins the second scan to the first, at the current location of the
second scan. The Database Engine reads each page one time and passes the rows from each page to both execution plans. This
continues until the end of the table is reached.
Below are the methods using which we can get the port information.
Method 1: SQL Server Configuration Manager
Method 2: Windows Event Viewer
Method 3: SQL Server Error Logs
Method 4: sys.dm_exec_connections DMV
Method 5: Reading registry using xp_instance_regread
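For example, Method 4 can be as simple as (requires at least one established TCP connection):
SELECT DISTINCT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE local_tcp_port IS NOT NULL;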
No, it’s not possible to take tempdb backup and it’s not required.
In Windows Server 2003, the Quorum disk resource is required for the Cluster to function. In your example, if the Quorum disk
suddenly became unavailable to the cluster then both nodes would immediately fail and not be able to restart the cluster
service.
A database index is a data structure that improves the speed of data retrieval operations on a database table at the cost of
additional writes and storage space to maintain the index data structure. Indexes are used to quickly locate data without
having to search every row in a database table every time a database table is accessed. Indexes can be created using one or
more columns of a database table.
FILESTREAM was introduced in SQL Server 2008 for the storage and management of unstructured data. The FILESTREAM feature
allows storing BLOB data (example: word documents, image files, music, and videos, etc) in the NT file system and ensures
transactional consistency between the unstructured data stored in the NT file system and the structured data stored in the
table.
No, it’s not possible to rename any system database.
Groups that contain an IP address resource and a network name resource (along with other resources) are published to clients
on the network under a unique server name. Because these groups appear as individual servers to clients, they are called virtual
servers. Users access applications or services on a virtual server the same way they access applications or services on a physical
server. They do not need to know that they are connecting to a cluster and have no knowledge of which node they are connected
to.
SQL Server error logs are stored in the below location.
Instance Root Directory\MSSQL\Log
The default owner of the system databases is sa. We can’t change the default owner of the master, model, tempdb and distribution databases.
In the Cluster Administrator, right-click on the SQL Server Group and from the popup menu choose Take Offline.
By default, there are seven SQL Server error logs; Errorlog and Errorlog.1 through Errorlog.6. The name of the current, most
recent log is Errorlog with no extension. The log is re-created every time that you restart SQL Server. When the Errorlog file is
re-created, the previous log is renamed to Errorlog.1, and the next previous log (Errorlog.1) is renamed to Errorlog.2, and so on.
Errorlog.6 is deleted.
• Preallocate space for all tempdb files by setting the file size to a value large enough to accommodate the typical workload in the environment. This prevents tempdb from expanding too frequently, which can affect performance. The tempdb database should be set to autogrow, but this should be used to increase disk space for unplanned exceptions.
• Create as many files as needed to maximize disk bandwidth. Using multiple files reduces tempdb storage contention and yields significantly better scalability. However, do not create too many files because this can reduce performance and increase management overhead. As a general guideline, create one data file for each CPU on the server (accounting for any affinity mask settings) and then adjust the number of files up or down as necessary. Note that a dual-core CPU is considered to be two CPUs.
• Make each data file the same size; this allows for optimal proportional-fill performance.
• Put the tempdb database on a fast I/O subsystem. Use disk striping if there are many directly attached disks.
• Put the tempdb database on disks that differ from those that are used by user databases.
After adding the shared disk in the storage, we can add disk to the respective SQL Server Group.
A non-clustered index can be defined on a table or view with a clustered index or on a heap. Each index row in the non-clustered index contains the non-clustered key value and a row locator. This locator points to the data row in the clustered
index or heap having the key value. The rows in the index are stored in the order of the index key values, but the data rows are
not guaranteed to be in any particular order unless a clustered index is created on the table.
Yes, it is possible to change the number of Error Logs retained.
We can follow the below steps to change the Error Log file retention.
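One way to do this (a sketch; 12 is an example retention value) is via the registry helper that backs SSMS’s “Configure SQL Server Error Logs” dialog:
USE master;
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 12;
GO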
SQL Server backup and restore operations occur within the context of the recovery model of the database. Recovery models
are designed to control transaction log maintenance. A recovery model is a database property that controls how transactions
are logged, whether the transaction log requires (and allows) backing up, and what kinds of restore operations are available.
Maximum 16.
A unique index ensures that the index key contains no duplicate values and therefore every row in the table or view is in some
way unique. Uniqueness can be a property of both clustered and non-clustered indexes.
Securables are the resources to which the SQL Server Database Engine authorization system regulates access. For example, a table is a securable. Some securables can be contained within others, creating nested hierarchies called “scopes” that can themselves be secured. The securable scopes are server, database, and schema.
Three recovery models exist: simple, full, and bulk-logged. Typically, a database uses the full recovery model or simple
recovery model. A database can be switched to another recovery model at any time.
A failover cluster is a group of independent computers that work together to increase the availability of applications and services.
The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another
node begins to provide service (a process known as failover). Users experience a minimum of disruptions in service.
A column-store index stores data in a column-wise (columnar) format, unlike the traditional B-tree structures used for
clustered and non-clustered row store indexes, which store data row-wise (in rows). A column-store index organizes the data
in individual columns that are joined together to form the index. This structure can offer significant performance gains for
queries that summarize large quantities of data, the sort typically used for business intelligence (BI) and data warehousing.
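A minimal sketch of creating one (table and column names assume the AdventureWorks sample database):
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_SalesOrderDetail_CS
ON Sales.SalesOrderDetail (ProductID, OrderQty, UnitPrice);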
Securable scope: Server – The server securable scope contains the following securables:
Securable scope: Database – The database securable scope contains the following securables:
Securable scope: Schema – The schema securable scope contains the following securables:
We can set the recovery model of the model database to Simple to make sure that a newly created database’s recovery model is automatically configured as Simple.
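For example:
ALTER DATABASE model SET RECOVERY SIMPLE;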
Services and applications are managed as single units for configuration and recovery purposes. If a resource depends on another
resource, both resources must be a member of the same service or application. For example, in a file share resource, the service
or application containing the file share must also contain the disk resource and network resources (such as the IP address and
NetBIOS name) to which clients connect to access the share. All resources within a service or application must be online on the
same node in the cluster.
A non-clustered index that is extended to include non-key columns in addition to the key columns.
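A sketch of such an index (reusing the Employee table from the CREATE INDEX example later in this article; the Salary column is an assumed example):
CREATE NONCLUSTERED INDEX IX_Employee_Name_Included
ON Employee (Employee_Name)
INCLUDE (Department, Salary);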
SQL Server 2005 introduced the concept of database schemas and the separation between database objects and ownership by users. An object owned by a database user is no longer tied to that user. The object now belongs to a schema – a container that can hold many database objects. A schema is a collection of database objects that are owned by a single principal and form a single namespace.
A Database backup is a copy of SQL Server data that can be used to restore and recover the data in case of any failure. A backup
of SQL Server data is created at the level of a database or one or more of its files or filegroups. There is another option to take
Transaction Log backups when the database recovery model of a database is Full.
The service account needs Create Object permissions in Active Directory.
An optimized non-clustered index, especially suited to cover queries that select from a well-defined subset of data. It uses a
filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance, reduce
index maintenance costs, and reduce index storage costs compared with full-table indexes.
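A minimal sketch (the Employee table is reused from later in this article; the IsActive column is an assumed example):
CREATE NONCLUSTERED INDEX IX_Employee_Active
ON Employee (Employee_Name)
WHERE IsActive = 1;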
Bulk Admin: Members of this role can perform Bulk Insert operations on all the databases.
DB Creator: Members of this role can Create/Alter/Drop/Restore a database.
Disk Admin: Members can manage disk files for the server and all databases. They can handle backup devices.
Process Admin: Members of this role can manage and terminate the processes on the SQL Server.
Server Admin: Members of this role can change Server-wide configurations and shut down SQL Server instances.
Setup Admin: Members of this role can Add/Remove Linked Servers.
Security Admin: Members of this role can create/manage Logins, including changing and resetting passwords as needed, and managing GRANT, REVOKE and DENY permissions at the server and database levels.
SysAdmin: Members of this role have Full Control on the instance and can perform any task.
Public: Public is another role just like Fixed Server Roles, that is by default granted to every login (Windows/SQL)
We have below type of backup available in SQL Server 2012.
• Full Backup
• Differential Backup
• Transaction Log Backup
• Copy-Only Backup
• File or Filegroup Backup
SQL Server services should always be in manual startup mode in the case of a cluster, because they are managed by the Cluster service and are brought online on the respective owner node during failover.
A spatial index provides the ability to perform certain operations more efficiently on spatial objects (spatial data) in a column of the geometry or geography data type. The spatial index reduces the number of objects on which relatively costly spatial operations need to be applied.
Dynamic management views and functions return server state information that can be used to monitor the health of a server
instance, diagnose problems, and tune performance.
There are two types of dynamic management views and functions:
Server-scoped dynamic management views and functions. These require VIEW SERVER STATE permission on the server.
Database-scoped dynamic management views and functions. These require VIEW DATABASE STATE permission on the
database.
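For example, a server-scoped DMV that shows currently executing requests (requires VIEW SERVER STATE):
SELECT session_id, status, command, wait_type
FROM sys.dm_exec_requests;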
The WITH PASSWORD option for backups is no longer available from SQL Server 2012 onwards.
Distributed lock management (DLM): Distributed lock management (DLM) enables two servers to access the same physical disk
at the same time without corrupting the data. If a device is updating a particular file or piece of data, the device gets locked so
that another controller can’t seize ownership and overwrite the data. NT does not currently support DLM, so disks are dedicated
to one node or the other.
A shredded, and persisted, representation of the XML binary large objects (BLOBs) in the xml data type column.
The VIEW DEFINITION permission lets a user see the metadata of the securable on which the permission is granted. However, VIEW DEFINITION permission does not confer access to the securable itself. For example, a user that is granted only VIEW DEFINITION permission on a table can see metadata related to the table in the sys.objects catalog view. However, without additional permissions such as SELECT or CONTROL, the user cannot read data from the table.
The VIEW DEFINITION permission can be granted on the following levels:
Transaction Log backups are possible in Full and Bulk Logged recovery model.
LooksAlive: Verifies that the SQL Server service runs on the online node every 5 seconds by default.
A special type of token-based functional index that is built and maintained by the Microsoft Full-Text Engine for SQL Server. It
provides efficient support for sophisticated word searches in character string data.
Guest user permits access to a database for any logins that are not mapped to a specific database user. The guest user cannot be dropped but it can be disabled by revoking the CONNECT permission. The recommendation is not valid for master, msdb, and tempdb system databases. If the Guest user is disabled in msdb system database, it may cause some issues.
The distribution database is also a system database, and the same considerations apply to the Guest user there. It is recommended to disable the guest user in every database as a best practice for securing the SQL Server.
• Bulk import operations (bcp, BULK INSERT, and INSERT… SELECT). For more information about when bulk import into a table is minimally logged.
• SELECT INTO operations.
• Partial updates to large value data types, using the .WRITE clause in the UPDATE statement when inserting or appending new data. Note that minimal logging is not used when existing values are updated.
• WRITETEXT and UPDATETEXT statements when inserting or appending new data into the text, ntext, and image data type columns. Note that minimal logging is not used when existing values are updated.
• CREATE INDEX operations (including indexed views).
• ALTER INDEX REBUILD or DBCC DBREINDEX operations.
• DROP INDEX new heap rebuild (if applicable).
IsAlive: Verifies that SQL Server accepts connections by executing sp_server_diagnostics. This health detection logic
determines if a node is down and the passive node then takes over the production workload.
A clustered index determines the order in which the rows of the table are stored on disk – and it actually stores row-level data in the leaf nodes of the index itself. A non-clustered index has no effect on the order in which the rows are stored.
Using a clustered index is an advantage when groups of data that can be clustered are frequently accessed by some queries. This speeds up retrieval because the data lives close to each other on disk. Also, if data is accessed in the same order as the clustered index, the retrieval will be much faster because the physical data stored on disk is sorted in the same order as the index.
A clustered index can be a disadvantage because any time a change is made to a value of an indexed column, the subsequent possibility of re-sorting rows to maintain order is a definite performance hit.
A table can have multiple non-clustered indexes. But, a table can have only one clustered index.
Non clustered indexes store both a value and a pointer to the actual row that holds that value. Clustered indexes don’t need
to store a pointer to the actual row because of the fact that the rows in the table are stored on disk in the same exact order as
the clustered index – and the clustered index actually stores the row-level data in its leaf nodes.
db_datareader: The db_datareader role has the ability to run a SELECT statement against any table or view in the database.
db_datawriter: The db_datawriter role has the ability to modify via INSERT, UPDATE, or DELETE data in any table or view in the database.
db_denydatareader: The db_denydatareader role is the exact opposite of the db_datareader role: instead of granting SELECT permissions on any database object, the db_denydatareader denies SELECT permissions.
db_denydatawriter: The db_denydatawriter role serves to restrict permissions on a given database. With this role, the user is prevented from modifying data in any table or view via an INSERT, UPDATE, or DELETE statement.
We can use the RESTORE VERIFYONLY command to make sure that the database backups are restorable.
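For example (the backup path is a placeholder):
RESTORE VERIFYONLY FROM DISK = 'D:\Backups\MyDatabase.bak';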
A Super Key is a set of one or more keys that can be used to identify a record uniquely in a table. Example: Primary Key, Unique Key and Alternate Key are subsets of Super Keys.
The deny roles are an extra safeguard that you can use to make sure that certain logins or groups will never have the type of access that is specifically denied under the role. With nesting of groups in Windows and multiple role assignments sometimes individuals inadvertently end up with excessive permissions. This is just another level that can be applied to lessen these accidental cracks in the permissions hierarchy.
The msdb database contains information about backup and restore history.
A Candidate Key is a set of one or more fields/columns that can identify a record uniquely in a table. There can be multiple
Candidate Keys in one table. Each Candidate Key can work as Primary Key.
A database user for which the corresponding SQL Server login is undefined or is incorrectly defined on a server instance cannot log in to the instance. Such a user is said to be an orphaned user of the database on that server instance.
Validation test is a mechanism of verifying that all the components which are participating in the Windows cluster are fine and
failover is happening between the nodes.
An Alternate Key is a key that can work as a primary key; basically, it is a candidate key that currently is not the primary key.
This lists the orphaned users:
EXEC sp_change_users_login 'Report'
If you already have a login id and password for this user, fix it by doing:
EXEC sp_change_users_login 'Auto_Fix', 'user'
If you want to create a new login id and password for this user, fix it by doing:
EXEC sp_change_users_login 'Auto_Fix', 'user', 'login', 'password'
The DCM (Differential Changed Map) page contains information about the extents that have changed since the last Full backup. The differential backup process reads information about the changed extents, and those extents are added to the differential backup.
A Primary Key is a set of one or more fields/columns of a table that uniquely identify a record in a database table. It cannot accept null or duplicate values. Only one Candidate Key can be the Primary Key.
To hide an instance of the SQL Server Database Engine, open SQL Server Configuration Manager, expand SQL Server Network Configuration, right-click Protocols for <Instance Name> and select Properties, then on the Flags tab set Hide Instance to Yes and restart the SQL Server service.
A backup with the COPY_ONLY option can be used in such a situation.
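A minimal sketch (database name and path are placeholders):
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase_copyonly.bak'
WITH COPY_ONLY;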
These reports are automatically stored for you in C:\Windows\Cluster\Reports as MHTML files.
Unique key is a set of one or more fields/columns of a table that uniquely identify a record in database table. It is like Primary
key but it can accept only one null value and it cannot have duplicate values.
We can verify the backup history of the database to confirm whether backups are happening or not, using the backupset table in msdb.
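For example:
SELECT database_name, type, backup_start_date, backup_finish_date
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;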
A Foreign Key is a field in a database table that is the Primary Key in another table. It can accept multiple null and duplicate values. Example: we can have a DeptID column in the Employee table which points to the DeptID column in a Department table, where it is a primary key.
Transparent Data Encryption (TDE) is a feature introduced in SQL Server 2008 and available in later versions for bulk encryption at the database file level (data file, log file and backup file) i.e. the entire database at rest. Once enabled for a database, this feature encrypts data into pages before it is written to the disk and decrypts when read from the disk. The best part of this feature is, as its name implies, it’s completely transparent to your application. This means literally no application code changes (only administrative change to enable it for a database) are required and hence no impact on the application code\functionalities when enabling TDE on a database being referenced by that application.
This column tells us about the backup type, which can be:
• D = Database
• I = Differential database
• L = Log
• F = File or filegroup
• G = Differential file
• P = Partial
• Q = Differential partial
Yes, definitely there will be downtime when SQL Server failover from one node to another.
By default, a Primary Key column is created with a Clustered Index.
The user must be a member of either of the below roles
Backup:
• sysadmin – fixed server role
• db_owner – fixed database role
• db_backupoperator – fixed database role
No it’s not possible in SQL Server Cluster feature.
By default, a Unique Key column is created with a Non-Clustered Index.
The Service Master Key is the root of the SQL Server encryption hierarchy. It is generated automatically the first time it is needed to encrypt another key. By default, the Service Master Key is encrypted using the Windows data protection API and using the local machine key. The Service Master Key can only be opened by the Windows service account under which it was created or by a principal with access to both the service account name and its password.
Yes – We can enable the trace flag 3226.
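For example, the trace flag can be enabled globally at runtime (or added as the -T3226 startup parameter):
DBCC TRACEON (3226, -1);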
No, it’s not possible. SQL Server 2012 and all previous versions of SQL Server require databases be created on clustered
resources. Internal drives or drives which are not part of the cluster group cannot hold user databases.
A table can have one of the following index configurations:
• No indexes
• A clustered index
• A clustered index and many non-clustered indexes
• A non-clustered index
• Many non-clustered indexes
Symmetric Key – In Symmetric cryptography system, the sender and the receiver of a message share a single, common key that is used to encrypt and decrypt the message. This is relatively easy to implement, and both the sender and the receiver can encrypt or decrypt the messages.
Asymmetric Key – Asymmetric cryptography, also known as public-key cryptography, is a system in which the sender and the receiver of a message have a pair of cryptographic keys – a public key and a private key – to encrypt and decrypt the message. This is a relatively complex system in which a message encrypted with one key of the pair can only be decrypted with the other key. This intricacy makes it a resource-intensive process.
An enhancement introduced in SQL Server 2012 SP1 Cumulative Update 2 enables backup and restore to and from the Windows Azure Blob storage service from SQL Server using TSQL.
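A sketch of the syntax (storage account, container and credential names are placeholders):
BACKUP DATABASE MyDatabase
TO URL = 'https://myaccount.blob.core.windows.net/backups/MyDatabase.bak'
WITH CREDENTIAL = 'MyAzureCredential';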
With the introduction of SQL Server 2012 Microsoft officially supports local disk TempDB in SQL Server cluster configurations.
CREATE INDEX name_index
ON Employee (Employee_Name, Department)
BACKUP SERVICE MASTER KEY TO FILE = 'path_to_file' ENCRYPTION BY PASSWORD = 'password'
The following are limitations specific to this release:
• The maximum backup size supported is 1 TB.
• In this implementation, you can issue backup or restore statements by using TSQL or SMO. A backup to or restoring
from the Windows Azure Blob storage service by using SQL Server Management Studio Backup or Restore wizard is not
currently enabled.
No it is not possible.
An index is structured by the SQL Server Index Manager as a balanced tree (or B-tree). A B-tree is analogous to an upside-down tree, with the root of the tree at the top, the leaf levels at the bottom, and intermediate levels in between.
Each object in the tree structure is a group of sorted index keys called an index page.
All search requests begin at the root of a B-tree and then move through the tree to the appropriate leaf level.
SQL Server 2005 contains configuration tools such as the sp_configure system stored procedure and the SQL Server Surface Area Configuration tool (for services and features) to enable or disable optional features as needed. These features are usually disabled by default. The features that can be enabled using the tool include: Ad hoc remote queries (OPENROWSET and OPENDATASOURCE), CLR integration, Database Mail and SQL Mail, OLE Automation procedures, remote Dedicated Administrator Connection, Service Broker endpoints, Web Assistant and xp_cmdshell; one is shown enabled via sp_configure below.
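For example, enabling xp_cmdshell with sp_configure (shown purely as an illustration; it should stay disabled unless required):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;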
Some typical examples include the following:
• You request a file backup of specific files, but one of the files is not online. The operation fails. To back up the online files, you can omit the offline file from the file list and repeat the operation.
• You request a partial backup, but a read/write filegroup is offline. Because all read/write filegroups are required for a partial backup, the operation fails.
• You request a full database backup, but one filegroup of the database is offline. Because all filegroups are implicitly included in a full database backup, this operation fails. To back up this database, you can use a file backup and specify only the filegroups that are online.
SMB stands for Server Message Block. An SMB file server can be used as a storage option starting with SQL Server 2012 to store system databases (Master, Model, MSDB, and TempDB) and Database Engine user databases.
SQL Server is divided into two main engines: the Relational Engine and the Storage Engine. The Relational Engine contains the below components:
Operations that cannot run during a database backup or transaction log backup include the following:
• File-management operations such as the ALTER DATABASE statement with either the ADD FILE or REMOVE FILE options.
• Shrink database or shrink file operations. This includes auto-shrink operations.
• If you try to create or delete a database file while a backup operation is in progress, the create or delete operation fails.
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS')
SQL Server supports three types of checksums: a checksum on pages, a checksum in log blocks, and a backup checksum. When generating a backup checksum, BACKUP verifies that the data read from the database is consistent with any checksum or torn-page indication that is present in the database.
The BACKUP statement optionally computes a backup checksum on the backup stream; if page-checksum or torn-page
information is present on a given page, when backing up the page, BACKUP also verifies the checksum and torn-page status and
the page ID, of the page. When creating a backup checksum, a backup operation does not add any checksums to pages. Pages
are backed up as they exist in the database, and the pages are unmodified by backup.
The AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. Introduced in SQL Server 2012, AlwaysOn Availability Groups maximizes the availability of a set of user databases for an enterprise. An availability group supports a failover environment for a discrete set of user databases, known as availability databases, that fail over together. An availability group supports a set of read-write primary databases and one to four sets of corresponding secondary databases. Optionally, secondary databases can be made available for read-only access and/or some backup operations.
No, we cannot insert data if the Clustered Index is disabled, because the clustered index is in fact the table itself, physically ordered according to one or more keys (columns).
Backup is an important component of a sound disaster recovery strategy. Here are some best practices you can follow to ensure you have a good backup in place:
• Make sure you are not storing your backups in the same physical location as the database files. When your physical drive goes bad, you should be able to use the other drive or remote location that stored the backups in order to perform a restore. Keep in mind that you could create several logical volumes or partitions from the same physical disk drive. Carefully study the disk partition and logical volume layouts before choosing a storage location for the backups.
• Make sure you have a proper backup schedule established according to the needs of the application and business requirements. As the backups get old, the risk of data loss is higher unless you have a way to regenerate all the data up to the point of failure.
• Make sure to actually restore the backups on a test server and verify that you can restore with all the options and conditions you need to use during a planned or un-planned downtime.
• Use the verification options provided by the backup utilities [BACKUP TSQL command, SQL Server Maintenance Plans, your backup software or solution, etc].
• Use advanced features like BACKUP CHECKSUM to detect problems with the backup media itself.
A container for a set of databases, availability databases, that fails over together.
Clustered Index: Clustered index defines the way in which data is ordered physically on the disk. And there can only be one
way in which you can order the data physically. Hence there can only be one clustered index per table.
Yes we can save multiple copies of database backup in a single file.
A database that belongs to an availability group. For each availability database, the availability group maintains a single read-write copy (the primary database) and one to four read-only copies (secondary databases).
We can have 1 clustered index and up to 999 non-clustered indexes on a table.
There are many tools available in the market for SQL server backups like
• SQL Litespeed (Dell)
• SQL Backup Pro (Redgate)
• SQL Safe Backup (Idera)
SQL Server Enterprise Edition
Maximum of 16 columns can be a part of an index.
Three copies are allowed in a Mirror backup apart from the original copy.
A heap is a table without a clustered index. Data is stored in the heap without a specified order. Usually data is initially stored in the order in which the rows are inserted into the table, but the Database Engine can move data around in the heap to store the rows efficiently, so the data order cannot be predicted.
There could be multiple reasons like:
• Permission issues, if the backups are configured to be taken on a share location
• The backup file is locked by tape backup software, due to which the backup process is not able to overwrite the backup file
• A Full backup was not taken before initiating a Differential or Transaction Log backup
• Not enough space available on the target location
Five in total: 1 Primary and up to 4 Secondaries.
Recovery Time Objective (RTO) is the amount of time which data or hardware is desired to be restored after a data corruption
or hardware failure.
Up to 10 availability groups is the recommendation, but it’s not enforced
Recovery Point Objective (RPO) describes the point in time that data can be restored from. For instance, if there is data corruption, data loss or unavailability, at what point in time can a valid copy of the data be restored? RPO is usually measured in minutes, hours, days, weeks, etc.
Up to 100 is the recommendation, but it’s not enforced
Data is stored in the leaf-level pages of an index. The percentage of space to be filled with data in a leaf-level page is decided by the fill factor. The remaining space is reserved for future growth of data in the page. Fill factor is a number from 1 to 100; its default value is 0, which is the same as 100. So a fill factor of 70 means 70% of the space is filled with data and the remaining 30% is vacant for future use. The higher the fill factor, the more data is stored in the page. The fill factor setting is applied when we create or rebuild an index.
TDE provides the ability to encrypt an entire database and to have the encryption be completely transparent to the
applications that access the database. TDE encrypts the data stored in both the database’s data file (.mdf) and log file (.ldf)
using either Advanced Encryption Standard (AES) or Triple DES (3DES) encryption. In addition, any backups of the database are
encrypted. This protects the data while it’s at rest as well as provides protection against losing sensitive information if the
backup media were lost or stolen.
The availability mode is a property of each availability replica. The availability mode determines whether the primary replica
waits to commit transactions on a database until a given secondary replica has written the transaction log records to disk
(hardened the log).
Below are the criteria you need to consider while choosing fill factor.
TDE requires SQL Server 2012 Enterprise edition. It’s not available in SQL Server 2012 Standard or Business Intelligence editions.
TDE is also available in SQL Server 2008 and SQL Server 2008 R2 Datacenter and Enterprise editions.
No, we don’t need SQL Server failover cluster instances to configure AlwaysOn availability groups.
Server level: A generic fill factor setting is applied at the server level for all table/index. We can set a server level default fill factor by using sp_configure with a parameter as below script.
EXEC sys.sp_configure 'fill factor', 90
GO
RECONFIGURE WITH OVERRIDE
GO
At Index/Table level: While creating/rebuilding index we can set a specific fill factor. We can use below script to set fill factor while rebuilding index.
ALTER INDEX Index_name ON [SchemaName].[TableName]
REBUILD WITH (FILLFACTOR = 80);
GO
Yes, we can install multiple instances on the same disk drive because each installation creates its own folder with the below format.
MSSQL11.INSTANCENAME
Yes, some performance overhead is involved in using TDE. The encryption and decryption process do require additional CPU cycles. The overhead for using TDE ranges from about 3 percent to 30 percent, depending on the type of workload.
SQL Server instances with low I/O and low CPU usage will have the least performance impact. Servers with high CPU usage
will have the most performance impact.
This option specifies index padding. When turned ON, the percentage specified by FILLFACTOR is applied to the intermediate-level and root-level pages of an index.
PAD_INDEX = { ON | OFF } When ON, the percentage of free space specified by FILLFACTOR is applied to the intermediate level
pages of the index. When OFF or a FILLFACTOR value is not specified, the intermediate level pages are filled to near capacity
leaving enough space for at least one row of the maximum size the index can have, considering the set of keys on the
intermediate pages. The default is OFF.
Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case-sensitivity, accent marks, kana character types and character width.
Default collation: SQL_Latin1_General_CP1_CI_AS
A master key is a symmetric key that is used to create certificates and asymmetric keys.
AlwaysOn supports the below availability modes.
Indexes can be defined on views. Indexed views are a method of storing the result set of the view in the database, thereby
reducing the overhead of dynamically building the result set. An indexed view also automatically reflects modifications made to
the data in the base tables after the index is created.
• Performs real-time I/O encryption and decryption of the data and log files
• Encrypts the entire database at rest
• No architectural changes needed
• No application code changes are required and the user experience is the same
• Easy to implement
• DBAs can still see the data
Asynchronous-commit mode
An availability replica that uses this availability mode is known as an asynchronous-commit replica. Under asynchronous-commit mode, the primary replica commits transactions without waiting for acknowledgement that an asynchronous-commit secondary replica has hardened the log. Asynchronous-commit mode minimizes transaction latency on the secondary databases but allows them to lag behind the primary databases, making some data loss possible.
Synchronous-commit mode.
An availability replica that uses this availability mode is known as a synchronous-commit replica. Under synchronous-commit
mode, before committing transactions, a synchronous-commit primary replica waits for a synchronous-commit secondary
replica to acknowledge that it has finished hardening the log. Synchronous-commit mode ensures that once a given secondary
database is synchronized with the primary database, committed transactions are fully protected. This protection comes at the
cost of increased transaction latency.
Index fragmentation is an “expected” and “unavoidable” characteristic of any OLTP environment. Fragmentation is defined as
any condition which causes more than the optimal amount of disk I/O to be performed in accessing a table, or causes the disk
I/Os that are performed to take longer than they optimally would.
Service Pack is abbreviated as SP; a service pack is a collection of updates and fixes, called patches, for an operating system or a software program. Many of these patches are often released before the larger service pack, but the service pack allows for an easy, single installation.
Patch – Publicly released update to fix a known bug/issue
Hotfix – update to fix a very specific issue, not always publicly released
• Not granular – cannot encrypt only specific tables/columns
• Not good for servers with a high CPU bottleneck
• Data is not protected in transit over communication channels/networks
The availability replica that makes the primary databases available for read-write connections from clients and, also, sends
transaction log records for each primary database to every secondary replica.
SQL Server 2012 Service Pack 2 (SP2)
Version: 11.0.5058.0
Release Date: 6/10/2014
MAXTRANSFERSIZE: specifies the largest unit of transfer in bytes to be used between SQL Server and the backup media. The possible values are multiples of 64 KB ranging up to 4194304 bytes (4 MB). The default is 1 MB.
An availability replica that maintains a secondary copy of each availability database and serves as a potential failover target for the availability group. Optionally, a secondary replica can support read-only access to secondary databases and can support creating backups on secondary databases.
Various Types of Index Fragmentation
Steps to install Service pack in Production environments:
1. First of all raise a change order and get the necessary approvals for the downtime window. Normally it takes around 45-60 minutes to install Service pack if there are no issues.
2. Once the downtime window is started, take a full backup of the user databases and system databases including the Resource database.
3. List down all the Startup parameters, Memory Usage, CPU Usage etc and save it in a separate file.
4. Install the service pack on SQL Servers.
5. Verify all the SQL Services are up and running as expected.
6. Validate the application functionality.
Note: There is a different approach to install Service pack on SQL Server cluster instances. That will be covered in SQL Server
cluster.
BUFFERCOUNT specifies the total number of I/O buffers to be used for the backup operation. The total space that will be
used by the buffers is determined by: buffercount * maxtransfersize.
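As a hedged illustration (the database name, path, and values below are hypothetical, not prescriptive), a backup tuned with these two options might look like this:

BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks.bak'
WITH MAXTRANSFERSIZE = 4194304, BUFFERCOUNT = 12;
-- Total buffer space consumed here: 12 * 4 MB = 48 MB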
A server name to which clients can connect in order to access a database in a primary or secondary replica of an AlwaysOn
availability group. Availability group listeners direct incoming connections to the primary replica or to a read-only secondary
replica.
When pages are less than fully used, the part of each page that is unused constitutes a form of fragmentation, since the table’s
or index’s rows are no longer packed together as tightly as they could be. This is known as Internal Fragmentation.
No, it’s not mandatory to restart Windows server after installing SQL Server service pack but it is always a good practice to do
so.
A continuous sequence of log backups is called a log chain. A log chain starts with a full backup of the database. Usually, a new log chain is only started when the database is backed up for the first time or after the recovery model is switched from simple recovery to full or bulk-logged recovery.
Unless you choose to overwrite existing backup sets when creating a full database backup, the existing log chain remains intact.
With the log chain intact, you can restore your database from any full database backup in the media set, followed by all
subsequent log backups up through your recovery point. The recovery point could be the end of the last log backup or a specific
recovery point in any of the log backups.
The AlwaysOn Availability Groups active secondary capabilities include support for read-only access to one or more secondary
replicas (readable secondary replicas). A readable secondary replica allows read-only access to all its secondary databases.
However, readable secondary databases are not set to read-only. They are dynamic. A given secondary database changes as
changes on the corresponding primary database are applied to the secondary database.
It usually occurs due to -
SQL Server 2008 introduced a concept that’s called “Slipstream Installation”. This is a way to deploy a SQL Server instance with
all the needed Service pack as part of the installation. Everything will be installed in one go, hence there is no need to deploy
any other service packs on the installation.
No, Full or Differential backups do not clear the transaction log.
Directing read-only connections to readable secondary replicas provides the following benefits:
In addition, readable secondaries provide robust support for read-only operations, as follows:
SQL Server uses 8KB Pages to store data on disk. When a clustered index is created on a table, SQL Server creates a b-tree data
structure for the index and links all data pages at the leaf level of the tree in a logical order by using a doubly linked list. Logical fragmentation occurs when the pages in this doubly linked list are not contiguous in the index, meaning that indexes have pages
in which the logical ordering of pages, which is based on the key value, does not match the physical ordering inside the data file.
The procedure of installing a SQL Server instance through the command line using the ConfigurationFile.ini file in quiet mode is known as Silent installation.
Yes, it is possible when you have not taken a Full backup of the database for months and the changes in the database grow larger than the size of the Full backup.
We can have up to 2 synchronous replicas, but we are not required to use any. We could run all secondaries in asynchronous mode if desired.
This could happen due to -
Yes, it is mandatory to take a Full backup of the database after switching the recovery model in order to initiate the log chain. Otherwise differential or transaction log backups will fail.
Yes. An active secondary can be used to offload read-only queries from the primary to a secondary instance in the availability
group.
Extent fragmentation occurs when the extents of a table or index are not contiguous with the database leaving extents from
one or more indexes intermingled in the file.
We have the below options to deal with an overgrowing transaction log file:
• Backing up the log
• Freeing disk space so that the log can automatically grow
• Adding a log file on a separate disk drive
• Increasing the size of the log file
• Killing a long-running transaction
Yes. An active secondary can be used for some types of backups.
This can occur due to:
The SQL Server default instance listens on port 1433 by default.
Database recovery model deals with the retention of the transaction log entries. Database recovery model decides if transaction log backups need to be triggered on a regular basis in order to keep the transaction log small or the Transaction logs will be truncated automatically.
• Simple – Committed transactions are removed from the log when the check point process occurs.
• Bulk Logged – Committed transactions are only removed when the transaction log backup process occurs.
• Full – Committed transactions are only removed when the transaction log backup process occurs.
We can use sys.dm_db_index_physical_stats DMF
Below are the methods using which we can get the port information (Method 4 is sketched after the list):
Method 1: SQL Server Configuration Manager
Method 2: Windows Event Viewer
Method 3: SQL Server Error Logs
Method 4: sys.dm_exec_connections DMV
Method 5: Reading the registry using xp_instance_regread
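For example, Method 4 can be as simple as the query below. Note this only reports a port for TCP connections; for a shared memory connection, local_tcp_port returns NULL.

SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;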
SQL Server 2012 SP1 CU2 enables SQL Server backup and restore directly to the Windows Azure Blob service. Backup to cloud offers benefits such as availability, limitless geo-replicated off-site storage, and ease of migration of data to and from the cloud. In this release, you can issue BACKUP or RESTORE statements by using T-SQL or SMO. Backing up to or restoring from the Windows Azure Blob storage service by using the SQL Server Management Studio Backup or Restore Wizard is not available in this release.
Yes, we can take transaction log backups on the secondary replicas without the COPY_ONLY option.
The DMF sys.dm_db_index_physical_stats has three modes (an example query follows the list):
1. DETAILED – reads all data and index pages. Be careful with using this option since it causes the entire index to be read into memory and may result in IO/memory issues.
2. SAMPLED – reads 1% of the pages if there are more than 10,000 pages.
3. LIMITED – only reads the parent level of the b-tree (same as DBCC SHOWCONTIG WITH FAST). The LIMITED option doesn't report page density, since it does not read the leaf-level pages.
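A minimal sketch of using the DMF in LIMITED mode to find fragmented indexes in the current database (the 30% threshold is an illustrative assumption, not a fixed rule):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.avg_fragmentation_in_percent > 30;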
FILESTREAM was introduced in SQL Server 2008 for the storage and management of unstructured data. The FILESTREAM feature
allows storing BLOB data (example: word documents, image files, music and videos etc) in the NT file system and ensures
transactional consistency between the unstructured data stored in the NT file system and the structured data stored in the
table.
A SQL Server credential is an object that is used to store authentication information required to connect to a resource outside
of SQL Server. Here, SQL Server backup and restore processes use credential to authenticate to the Windows Azure Blob
storage service. The Credential stores the name of the storage account and the storage account access key values. Once the
credential is created, it must be specified in the WITH CREDENTIAL option when issuing the BACKUP/RESTORE statements.
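A rough sketch of creating such a credential and using it in a backup to Windows Azure Blob storage follows; the storage account, container, database name, and key are placeholders, not real values:

CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',           -- placeholder storage account name
     SECRET = '<storage account access key>'; -- placeholder access key
GO
BACKUP DATABASE MyDb
TO URL = N'https://mystorageaccount.blob.core.windows.net/backups/MyDb.bak'
WITH CREDENTIAL = 'AzureBackupCredential';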
Within the context of a session between the primary replica and a secondary replica, the primary and secondary roles are
potentially interchangeable in a process known as failover. During a failover the target secondary replica transitions to the
primary role, becoming the new primary replica. The new primary replica brings its databases online as the primary databases,
and client applications can connect to them. When the former primary replica is available, it transitions to the secondary role,
becoming a secondary replica. The former primary databases become secondary databases and data synchronization resumes.
SQL Server error logs are stored in the below location.
Instance Root Directory\MSSQL\Log
• Flexible, reliable, and limitless off-site storage: Storing your backups on Windows Azure Blob service can be a convenient, flexible, and easy to access off-site option.
• No overhead of hardware management
• Cost Benefits: Pay only for the service that is used. Can be cost-effective as an off-site and backup archive option.
Three forms of failover exist: automatic, manual, and forced (with possible data loss). The form or forms of failover supported by a given secondary replica depend on its availability mode.
By default, there are seven SQL Server error logs; Errorlog and Errorlog.1 through Errorlog.6. The name of the current, most
recent log is Errorlog with no extension. The log is re-created every time that you restart SQL Server. When the Errorlog file is
re-created, the previous log is renamed to Errorlog.1, and the next previous log (Errorlog.1) is renamed to Errorlog.2, and so on.
Errorlog.6 is deleted.
This job is failing because we did not take a full database backup of the newly created database. We need a full backup of a database to initiate the log chain.
SQL Server 2008 onwards we can use REBUILD command to remove fragmentation from heap.
ALTER TABLE…REBUILD (SQL 2008+).
Yes, it is possible to change the number of Error logs retained. We can follow the below steps to change the Error log file retention.
1. Open SQL Server Management Studio and then connect to the SQL Server instance.
2. In Object Explorer, expand the Management node, then right-click SQL Server Logs and click Configure.
3. In the Configure SQL Server Error Logs window, enter a value between 6 and 99 for the number of error logs and click OK to save the changes.
This is a very open-ended question.
As per my understanding, organizations normally follow the below standards.
Prod/QA Environment:
For Small size databases (<200 GB)
• Full Backup – Daily
• Differential Backups – NA
• Transaction Log backups – Every 10-15 minutes depending upon the RPO
For Large size databases (>=200 GB)
• Full Backup – Weekly
• Differential Backups – Daily
• Transaction Log backups – Every 10-15 minutes depending upon the RPO
Again, it all depends upon the criticality of the database; e.g., for data warehousing databases it may be that there is no requirement for transaction log backups.
A manual failover occurs after a database administrator issues a failover command and causes a synchronized secondary replica
to transition to the primary role (with guaranteed data protection) and the primary replica to transition to the secondary role.
A manual failover requires that both the primary replica and the target secondary replica are running under synchronous-commit mode, and the secondary replica must already be synchronized.
When the query optimizer generates a query plan, it analyzes what are the best indexes for a particular filter condition. If the
best indexes do not exist, the query optimizer generates a suboptimal query plan, but still stores information about these
indexes. The missing indexes feature enables you to access information about these indexes so you can decide whether they
should be implemented.
Authentication is the process of verifying who you are. Logging on to a PC with a username and password is authentication.
Authorization is the process of verifying that you have access to something. Authorization is gaining access to a resource (e.g.
directory on a hard disk) because the permissions configured on it allow you to access it.
Again, it all varies with the organization's policies. As a general practice it's always better to keep database backups on a shared location (in the data center) for at least 2-3 days. Daily backups should be written to tape and retained for a month or so. Also, as a preventive measure, there should be monthly or bimonthly backups with a retention policy of at least one year.
An automatic failover occurs in response to a failure that causes a synchronized secondary replica to transition to the primary
role (with guaranteed data protection). When the former primary replica becomes available, it transitions to the secondary role.
Automatic failover requires that both the primary replica and the target secondary replica are running under synchronous-commit mode with the failover mode set to “Automatic”. In addition, the secondary replica must already be synchronized, have
WSFC quorum, and meet the conditions specified by the flexible failover policy of the availability group.
There are two type of authentication available in SQL Server.
Windows Authentication — TRUSTED connection
Windows Logins
Windows Groups
MIXED authentication — NON Trusted connection
Windows Logins
Windows Groups
SQL Server logins
Database Maintenance Plans allow us to automate many database administration tasks in Microsoft SQL Server. Maintenance
plans create a workflow of the tasks required to make sure that our database is optimized, regularly backed up, and free of
inconsistencies.
SQL Server Failover Cluster Instances (FCIs) do not support automatic failover by availability groups, so any availability replica
that is hosted by an FCI can only be configured for manual failover.
sys.dm_db_index_usage_stats – Returns counts of different types of index operations and the time each type of operation was
last performed.
Windows authentication mode requires users to provide a valid Windows username and password to access the database server. In enterprise environments, these credentials are normally Active Directory domain credentials.
Mixed authentication mode allows the use of Windows credentials but supplements them with local SQL Server user accounts
that the administrator may create and maintain within SQL Server
SQL Server 2012 maintenance plans provide the following features:
• Workflow creation using a variety of typical maintenance tasks. You can also create your own custom Transact-SQL scripts.
• Conceptual hierarchies. Each plan lets you create or edit task workflows. Tasks in each plan can be grouped into subplans, which can be scheduled to run at different times.
• Support for multiserver plans that can be used in master server/target server environments.
• Support for logging plan history to remote servers.
• Support for Windows Authentication and SQL Server Authentication. When possible, use Windows Authentication.
The only form of failover supported is forced manual failover (with possible data loss), typically called forced failover. Forced failover is considered a form of manual failover because it can only be initiated manually. Forced failover is a disaster recovery option. It is the only form of failover that is possible when the target secondary replica is not synchronized with the primary replica.
When we create or rebuild an index, by setting the SORT_IN_TEMPDB option to ON we can direct the SQL Server Database
Engine to use tempdb to store the intermediate sort results that are used to build the index. Although this option increases the
amount of temporary disk space that is used to create an index, the option could reduce the time that is required to create or
rebuild an index when tempdb is on a set of disks different from that of the user database.
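As an illustrative sketch (the index and table names are hypothetical), the option is set as part of the index operation:

ALTER INDEX IX_TableName_Column ON dbo.TableName
REBUILD WITH (SORT_IN_TEMPDB = ON);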
Windows authentication is definitely more secure as it’s controlled and authenticated by Active Directory policies.
We have below options in the Maintenance Wizard.
1. Check Database Integrity – checks the logical and physical integrity of all objects in the database
2. Shrink Database – shrinks the data and log files in the specified database on a file-by-file basis, releasing unused space
3. Reorganize Index – uses minimal system resources and defragments indexes of tables and views by reordering the pages to match the logical order
4. Rebuild Index – drops and recreates the index reclaiming space and reorders the index rows in contiguous pages
5. Update Statistics – updating statistics ensures queries run with up-to-date statistics, but it also causes queries to be recompiled, which can result in a performance trade-off
6. Clean Up History – deletes entries in the history tables
7. Execute SQL Server Agent job – runs a SQL Server Agent job (the Agent is the Windows service that runs scheduled jobs)
8. Backup Database (Full) – Backs up the entire database, but as databases grow, the backup takes longer and requires more storage. The best solution is to use in conjunction with differential backups.
9. Backup Database (Differential) – used to back up only the data that has changed since the last full backup
10. Backup Database (Transaction Log) – used to back up individual transaction logs, which can be individually restored since the last full backup
11. Maintenance clean up task – performs housekeeping functions
Database administrators use the AlwaysOn Dashboard to obtain an at-a-glance view of the health of an AlwaysOn availability group and its availability replicas and databases in SQL Server 2012. Some of the typical uses for the AlwaysOn Dashboard are:
Principals are entities that can request SQL Server resources. A Windows Login is an example of an indivisible principal, and a Windows Group is an example of a principal that is a collection. Every principal has a security identifier (SID). e.g.
Windows-level principals
• Windows Domain Login
• Windows Local Login
SQL Server-level principals
• SQL Server Login
• Server Role
Database-level principals
• Database User
• Database Role
• Application Role
msdb database contains information about all the Database Maintenance plans.
The Availability Group Wizard is a GUI in SQL Server Management Studio used to create and configure an AlwaysOn availability group in SQL Server 2012.
Data compression can be configured for the following database objects: a whole table stored as a heap, a whole table stored as a clustered index, a whole nonclustered index, a whole indexed view, and individual partitions of partitioned tables and indexes.
Securables are the resources to which the SQL Server Database Engine authorization system regulates access. For example, a table is a securable. Some securables can be contained within others, creating nested hierarchies called “scopes” that can themselves be secured. The securable scopes are server, database, and schema.
• sysdbmaintplan_databases: Contains one row for each database that has an associated upgraded database maintenance plan.
• sysdbmaintplan_history: Contains one row for each upgraded database maintenance plan action performed.
• sysdbmaintplan_jobs: Contains one row for each upgraded database maintenance plan job.
• Sysdbmaintplans: Contains one row for each upgraded database maintenance plan
Securable scope: Server – The server securable scope contains the following securables:
• Endpoint
• Login
• Server role
• Database
Securable scope: Database – The database securable scope contains the following securables:
• User
• Database role
• Application role
• Assembly
• Message type
• Route
• Service
• Remote Service Binding
• Full text catalog
• Certificate
• Asymmetric key
• Symmetric key
• Contract
• Schema
Securable scope: Schema –The schema securable scope contains the following securables:
• Type
• XML schema collection
• Object – The object class has the following members:
o Aggregate
o Function
o Procedure
o Queue
o Synonym
o Table
o View
Yes we can change the Database Maintenance plan anytime.
Data compression is implemented at two levels: ROW and PAGE.
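A minimal sketch of enabling each level, assuming a hypothetical table dbo.TableName and index IX_TableName_Column:

-- PAGE compression on the table (its heap or clustered index)
ALTER TABLE dbo.TableName REBUILD WITH (DATA_COMPRESSION = PAGE);
-- ROW compression on a specific nonclustered index
ALTER INDEX IX_TableName_Column ON dbo.TableName
REBUILD WITH (DATA_COMPRESSION = ROW);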
A login is the principal that is used to connect to the SQL Server instance. A user is the principal that is used to connect to a database.
The security context on the instance itself is dictated by the login, its roles and the permissions granted/denied. The security
context on the database is dictated by the user, its roles and the permissions granted/denied.
Yes, we can run T-SQL statements in a Database Maintenance Plan.
The benefits of using a non-clustered column store index are:
SQL Server 2005 introduced the concept of database schemas and the separation between database objects and ownership by users. An object owned by a database user is no longer tied to that user. The object now belongs to a schema – a container that can hold many database objects. A schema is a collection of database objects that are owned by a single principal and form a single namespace.
Yes, we can run a SQL Server Agent job in a Database Maintenance Plan.
Yes, it can be configured.
DBCC stands for Database Consistency Checker. There are many DBCC commands in SQL Server. We generally use these commands to check the consistency of the databases, i.e., for maintenance, validation tasks, and status checks.
Bulk Admin: Members of this role can perform Bulk Insert operations on all the databases.
DB Creator: Members of this role can Create/Alter/Drop/Restore a database.
Disk Admin: Members can manage disk files for the server and all databases. They can handle backup devices.
Process Admin: Members of this role can manage and terminate the processes on the SQL Server.
Server Admin: Members of this role can change Server-wide configurations and shutdown SQL Server instance.
Setup Admin: Members of this role can Add/Remove Linked Servers.
Security Admin: Members of this role can create/manage Logins, including changing and resetting passwords as needed, and managing GRANT, REVOKE and DENY permissions at the server and database levels.
SysAdmin: Members of this role have Full Control on the instance and can perform any task.
Public: Public is another role just like Fixed Server Roles, that is by default granted to every login (Windows/SQL)
The History Cleanup task deletes entries in the following history tables in the SQL Server msdb database.
• backupfile
• backupfilegroup
• backupmediafamily
• backupmediaset
• backupset
• restorefile
• restorefilegroup
• restorehistory
By using the History Cleanup task, a package can delete historical data related to backup and restore activities, SQL Server
Agent jobs, and database maintenance plans.
Yes, it can be configured.
Dynamic management views and functions return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance.
Dynamic management views and functions return internal, implementation-specific state data. Their schemas and the data
they return may change in future releases of SQL Server. Therefore, dynamic management views and functions in future
releases may not be compatible with the dynamic management views and functions in this release. For example, in future releases of SQL Server, Microsoft may augment the definition of any dynamic management view by adding columns to the end
of the column list. We recommend against using the syntax SELECT * FROM dynamic_management_view_name in production
code because the number of columns returned might change and break your application.
There are two types of dynamic management views and functions:
Server-scoped dynamic management views and functions. These require VIEW SERVER STATE permission on the server.
Database-scoped dynamic management views and functions. These require VIEW DATABASE STATE permission on the
database.
It uses sp_purge_jobhistory and sp_delete_backuphistory statements.
Yes, it is possible.
When you execute one of these DBCC commands, the Database Engine creates a database snapshot and brings it to a transactionally consistent state. The DBCC command then runs the checks against this snapshot. After the DBCC command is completed, this snapshot is dropped.
Sometimes an internal database snapshot is not required or cannot be created. When this occurs, the DBCC command executes against the actual database. If the database is online, the DBCC command uses table-locking to ensure the consistency of the objects that it is checking. This behavior is the same as if the WITH TABLOCK option were specified. An internal database snapshot is not created when a DBCC command is executed:
The VIEW DEFINITION permission lets a user see the metadata of the securable on which the permission is granted. However, VIEW DEFINITION permission does not confer access to the securable itself. For example, a user that is granted only VIEW DEFINITION permission on a table can see metadata related to the table in the sys.objects catalog view. However, without additional permissions such as SELECT or CONTROL, the user cannot read data from the table.
The VIEW DEFINITION permission can be granted on the following levels:
• Server scope
• Database scope
• Schema scope
• Individual entities
The Notify Operator task sends notification messages to SQL Server Agent operators. A SQL Server Agent operator is an alias for
a person or group that can receive electronic notifications.
DBCC CHECKDB is a command that at the back end checks the following:
1. Object Integrity
2. Linkages for text, ntext, and image pages
3. Index and data pages are correctly linked.
4. Indexes are in their proper sort order.
5. Pointers are consistent.
6. The data on each page is reasonable (Allocation Checks).
7. Page offsets are reasonable.
Guest user permits access to a database for any logins that are not mapped to a specific database user. The guest user cannot be dropped but it can be disabled by revoking the CONNECT permission. The recommendation is not valid for master, msdb and tempdb system databases. If Guest user is disabled in msdb system database, it may cause some issues.
The distribution database is also a system database. It is recommended to disable the guest user in every database as a best practice for securing SQL Server.
• Setup SQL Server Alerts to notify SQL Server Agent Operators on a failure condition.
• Include RAISERROR or TRY\CATCH logic in your backup or restore code to send an email on a failure condition.
DMVs can be used to gather baseline information and to diagnose performance problems. A few important DMVs are:
1. sys.dm_os_performance_counters
2. sys.dm_db_index_physical_stats
3. sys.dm_db_index_usage_stats
Yes, it is possible to create a Server role in SQL Server 2012.
A restore is a multiphase process. The possible phases of a restore include the data copy, redo (roll forward), and undo (roll back) phases:
• The data copy phase involves copying all the data, log, and index pages from the backup media of a database to the database files.
• The redo phase applies the logged transactions to the data copied from the backup to roll forward that data to the recovery point. At this point, a database typically has uncommitted transactions and is in an unusable state. In that case, an undo phase is required as part of recovering the database.
• The undo phase, which is the last part of the restore, rolls back any uncommitted transactions and makes the database available to users. After the roll back phase, subsequent backups cannot be restored.
Use the below query to list all available DMVs present in a SQL Server installation:
SELECT name, type, type_desc FROM sys.system_objects WHERE name LIKE 'dm[_]%'
ORDER BY name
Server-Level Views
sys.server_permissions
sys.sql_logins
sys.server_principals
sys.server_role_members
Database-Level Views
sys.database_permissions
sys.database_role_members
sys.database_principals
• RESTORE WITH RECOVERY is the default behavior which leaves the database ready for use by rolling back the uncommitted transactions. Additional transaction logs cannot be restored. Select this option if you are restoring all of the necessary backups now.
• RESTORE WITH NORECOVERY which leaves the database non-operational, and does not roll back the uncommitted
transactions. Additional transaction logs can be restored. The database cannot be used until it is recovered.
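A minimal sketch of the difference, assuming hypothetical file names: every backup except the last is restored WITH NORECOVERY, and the final one WITH RECOVERY.

RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_Full.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_Log1.trn' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_Log2.trn' WITH RECOVERY;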
There are two types of dynamic management views:
a. Server-scoped DMV: Stored in Master Database
b. Database-scoped DMV: Specific to each database
SQL Server 2005 introduced the following msdb database fixed database roles, which give administrators finer control over access to SQL Server Agent. The roles listed from least to most privileged access are:
• SQL Agent User Role
• SQL Agent Reader Role
• SQL Agent Operator Role
This option leaves the database in read-only mode. It undoes uncommitted transactions, but saves the undo actions in a standby
file so that recovery effects can be reverted.
Issues at the database level, such as a database becoming suspect due to the loss of a data file, deletion of a database, or corruption of a transaction log, do not cause an availability group to fail over.
DBCC INPUTBUFFER returns the last SQL statement issued by a client. The command requires the SPID:
DBCC INPUTBUFFER (SPID)
SQL Agent Operator Role is the most privileged of the SQL Server Agent fixed database roles. It includes all the permissions of
SQL Agent User Role and SQL Agent Reader Role. Members of this role can also view properties for operators and proxies, and
enumerate available proxies and alerts on the server.
Requirement of Database restore:
• In case of Database corruption, data issue or Disaster recovery
• In case of Database refresh from production to QA and Dev environment
• In case of upgrade from a lower version to an upper version in a side-by-side upgrade
sys.dm_db_file_space_usage – Lists space usage information for each file in the database. Reports on unallocated extent page count.
sys.dm_db_session_space_usage – Broken down by each session. Lists the number of pages allocated and deallocated
sys.dm_db_task_space_usage – Broken down by each task. Lists page allocation and deallocation activity
db_datareader: The db_datareader role has the ability to run a SELECT statement against any table or view in the database.
db_datawriter: The db_datawriter role has the ability to modify via INSERT, UPDATE, or DELETE data in any table or view in the database.
db_denydatareader: The db_denydatareader role is the exact opposite of the db_datareader role: instead of granting SELECT permissions on any database object, the db_denydatareader denies SELECT permissions.
db_denydatawriter: The db_denydatawriter role serves to restrict permissions on a given database. With this role, the user is prevented from modifying the data in any table or view via an INSERT, UPDATE, or DELETE statement
db_accessadmin: The db_accessadmin fixed database role is akin to the securityadmin fixed server role: it has the ability to add and remove users to the database.
The db_accessadmin role does not, however, have the ability to create or remove database roles, nor does it have the ability to manage permissions.
Granted with GRANT option: CONNECT
db_securityadmin: The db_securityadmin role has rights to handle all permissions within a database.
The full list is:
DENY, GRANT, REVOKE, sp_addapprole, sp_addgroup, sp_addrole, sp_addrolemember, sp_approlepassword, sp_changegroup, sp_changeobjectowner, sp_dropapprole, sp_dropgroup, sp_droprole, sp_droprolemember. The list includes the DENY, GRANT, and REVOKE commands along with all the stored procedures for managing roles.
db_ddladmin: A user with the db_ddladmin fixed database role has rights to issue Data Definition Language (DDL) statements in order to CREATE, DROP, or ALTER objects in the database.
db_backupoperator: The db_backupoperator role has rights to create backups of a database. Restore permissions are not granted; only backups can be performed.
db_owner: Equal to a sysadmin at the instance level, db_owner can perform any task at the database level.
public: By default all the users in database level are granted Public Role
We normally get a request in the below format.
Source Server & Source Database
Target Server & Target Database
1. First of all we have to extract the permissions of the target database so that we can retain the permissions of the database after the DB restore from production.
2. Perform the database restore from the source database backup.
3. Once the restore is done, we have to drop the users on the target database which was restored.
4. Apply the permissions using the script which was extracted in the first step.
5. Check and resolve any kind of Orphaned users
Yes, it automatically takes care of page repair.
Some background information on the SQL schedulers will make the RUNNABLE – SUSPENDED – RUNNING model clearer.
Schedulers are made up of three parts. A thread cycles through these three parts:
1) Processor
2) Waiter list – threads waiting for resources. Use sys.dm_os_waiting_tasks to view the resource waits
3) Runnable – the thread has all its resources and is waiting for the processor. Explore the runnable status with the sys.dm_os_schedulers and sys.dm_exec_requests DMVs
This leads us into the RUNNABLE – SUSPENDED – RUNNING states:
1) RUNNING – thread is executing on the server
2) SUSPENDED – thread is waiting for resources to become available.
3) RUNNABLE – the thread is waiting to execute on the processor
The deny roles are an extra safeguard that you can use to make sure that certain logins or groups will never have the type of
access that is specifically denied under the role. With nesting of groups in Windows and multiple role assignments sometimes
individuals inadvertently end up with excessive permissions. This is just another level that can be applied to lessen these
accidental cracks in the permissions hierarchy.
SQL Server supports inserting named marks into the transaction log to allow recovery to that specific mark. Log marks are
transaction specific and are inserted only if their associated transaction commits. As a result, marks can be tied to specific work,
and you can recover to a point that includes or excludes this work.
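A hedged sketch of a marked transaction and a restore to that mark (all object, database, and file names are hypothetical):

BEGIN TRANSACTION PriceUpdate WITH MARK 'Before the bulk price update';
UPDATE dbo.Products SET Price = Price * 1.10;
COMMIT TRANSACTION;
-- Later, recover the log up to (and including) the mark:
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_Log.trn'
WITH STOPATMARK = 'PriceUpdate', RECOVERY;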
ALTER DATABASE Db1 SET HADR AVAILABILITY GROUP = MyAG;
A thread that is executing moves to SUSPENDED and into the waiter list if it has to wait for a resource.
An application role is a database principal that enables an application to run with its own, user-like permissions. You can use application roles to enable access to specific data to only those users who connect through a particular application. Unlike database roles, application roles contain no members and are inactive by default. Application roles work with both authentication modes. Application roles are enabled by using sp_setapprole, which requires a password.
Because application roles are a database-level principal, they can access other databases only through permissions granted in
those databases to guest. Therefore, any database in which guest has been disabled will be inaccessible to application roles in other databases.
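A minimal sketch, with hypothetical names and password, of creating an application role and activating it from the application's connection:

CREATE APPLICATION ROLE ReportingApp WITH PASSWORD = 'Str0ng!Passw0rd'; -- hypothetical password
GRANT SELECT ON dbo.SalesSummary TO ReportingApp;
-- Executed by the application after it connects:
EXEC sp_setapprole 'ReportingApp', 'Str0ng!Passw0rd';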
The user must be a member of any of the below roles:
• Sysadmin – fixed server role
• Dbcreator – fixed server role
• db_owner – fixed database role
ALTER DATABASE MyDb2 SET HADR OFF;
GO
When the resource becomes available, the thread moves to the bottom of the RUNNABLE queue.
A database user for which the corresponding SQL Server login is undefined or is incorrectly defined on a server instance cannot log in to the instance. Such a user is said to be an orphaned user of the database on that server instance.
• A database user can become orphaned if the corresponding SQL Server login is dropped.
• A database user can become orphaned after a database is restored or attached to a different instance of SQL Server.
• Orphaning can happen if the database user is mapped to a SID that is not present in the new server instance
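A short sketch of detecting and fixing orphaned users; the user and login names are hypothetical:

-- Report orphaned users in the current database
EXEC sp_change_users_login @Action = 'Report';
-- Remap an orphaned database user to an existing SQL Server login
ALTER USER AppUser WITH LOGIN = AppLogin;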
No, it's not possible to restore an upper-version database backup to a lower version.
SQL Server 2012 AlwaysOn Availability Group supports row and page compression for tables and indexes, we can use the
data compression feature to help compress the data inside a database, and to help reduce the size of the database. We
can use encryption in SQL Server for connections, data, and stored procedures; we can also perform database level
encryption: Transparent data encryption (TDE). If you use transparent data encryption (TDE), the service master key for
creating and decrypting other keys must be the same on every server instance that hosts an availability replica for the
availability group
The top SPID at the head of the RUNNABLE queue moves to the processor.
To hide an instance of the SQL Server Database Engine
1. In SQL Server Configuration Manager, expand SQL Server Network Configuration, right-click Protocols for <server instance>, and then select Properties.
2. On the Flags tab, in the Hide Instance box, select yes, and then click OK to close the dialog box. The change takes effect
immediately for new connections.
Point in Time Recovery option gives us the ability to restore a database prior to an event that occurred that was detrimental to your database. In order for this option to work, the database needs to be either in the FULL or Bulk-Logged recovery model and you need to be doing transaction log backups.
High-level steps to perform a point-in-time recovery (a worked sketch follows the list):
1. Take a tail log backup of the database which needs to be restored
2. Restore the most recent full backup with the NORECOVERY clause
3. Restore the most recent differential backup with the NORECOVERY clause
4. Restore all of the subsequent transaction log backups with the NORECOVERY clause except the last tail log backup
5. Restore the last tail log backup with the RECOVERY clause and a STOPAT statement if the entire transaction log does
not need to be applied
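Put together, a point-in-time restore might look like the hedged sketch below; the file names and the STOPAT timestamp are illustrative assumptions:

RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_Full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_Diff.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_Tail.trn'
WITH RECOVERY, STOPAT = '2014-06-10 14:30:00'; -- recover to this point in time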
sys.dm_db_index_usage_stats:- Maintains counts for the range of index activity and the time each type of operation was last performed. Also displays statistics on how an index is used against a query.
sys.dm_db_missing_index_details:- Returns detailed information about each missing index on a table. Information is lost at SQL Server recycle.
sys.dm_db_missing_index_columns:- Returns information about database table columns that are missing an index, excluding spatial indexes.
sys.dm_exec_query_stats:- Performance statistics for cached plans. The information is only available while the plan remains in the cache.
sys.dm_db_index_operational_stats:- Returns IO, locking, latching, and access activity. Useful for identifying index hotspots and waits for reads/writes to a table. Will give information about insert, update, and delete activity.
• When possible, use Windows Authentication logins instead of SQL Server logins
• Using server, database and application roles to control access to the data
• Using an unguessable SA password
• If possible, disable and rename the sa account
• Restricting physical access to the SQL Server
• Disabling the Guest account
• Minimize the number of sysadmins allowed to access SQL Server.
• Give users the least amount of permissions they need to perform their job.
• Use stored procedures or views to allow users to access data instead of letting them directly access tables.
• Don’t grant permissions to the public database role.
• Remove user login IDs who no longer need access to SQL Server.
• Avoid creating network shares on any SQL Server.
• Turn on login auditing so you can see who has succeeded, and failed, to login.
• Ensure that your SQL Servers are behind a firewall and are not exposed directly to the Internet.
• Do not use DBO users as application logins
• Firewall restrictions ensure that only the SQL Server listening port is available on the database server.
• Apply the latest security updates / patches
A piecemeal restore sequence restores and recovers a database in stages at the filegroup level, beginning with the primary and all read-write secondary filegroups. The piecemeal restore process allows us to restore the primary filegroup first, so the database can come online while the remaining filegroups are restored and transactions are running against the primary. This is mostly suitable for data warehousing databases.
This command will reindex your table. If the index name is left out, then all indexes are rebuilt. If the fill factor is set to 0, then the original fill factor from when the index was created is used.
Transparent Data Encryption (TDE) is a feature introduced in SQL Server 2008 and available in later versions for bulk
encryption at the database file level (data file, log file and backup file) i.e. the entire database at rest. Once enabled for a
database, this feature encrypts data into pages before it is written to the disk and decrypts when read from the disk. The best
part of this feature is, as its name implies, that it's completely transparent to your application. This means literally no application code changes (only an administrative change to enable it for a database) are required, and hence no impact on the application code/functionality when enabling TDE on a database referenced by that application.
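A minimal sketch of enabling TDE; the database name, certificate name, and password are hypothetical:

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd'; -- hypothetical password
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE MyDb;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE MyDb SET ENCRYPTION ON;
-- Back up TDECert and its private key immediately; encrypted backups cannot be restored without it.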
Partial backups are useful whenever you want to exclude read-only filegroups. A partial backup resembles a full database
backup, but a partial backup does not contain all the filegroups. Instead, for a read-write database, a partial backup contains
the data in the primary filegroup, every read-write filegroup, and, optionally, one or more read-only files. A partial backup of a
read-only database contains only the primary filegroup.
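For illustration (the database name and path are assumptions), a partial backup of all read-write filegroups can be taken as follows:

BACKUP DATABASE MyDb READ_WRITE_FILEGROUPS
TO DISK = N'D:\Backups\MyDb_Partial.bak';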
Session-timeout period is a replica property that controls how long, in seconds, an availability replica waits for a ping response from a connected replica before considering the connection to have failed. By default, a replica waits 10 seconds for a ping response. This replica property applies only to the connection between a given secondary replica and the primary replica of the availability group.
DBCC SHRINKFILE: This will allow you to shrink one of the database files. This is equivalent to doing a database shrink, but you
can specify what file and the size to shrink it to. Use the sp_helpdb command along with the database name to see the actual
file names used.
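A hedged example (the database, logical file name, and target size are hypothetical) that shrinks a log file to roughly 1 GB:

USE MyDb;
GO
DBCC SHRINKFILE (N'MyDb_log', 1024); -- target size in MB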
The Service Master Key is the root of the SQL Server encryption hierarchy. It is generated automatically the first time it is
needed to encrypt another key. By default, the Service Master Key is encrypted using the Windows data protection API and
using the local machine key. The Service Master Key can only be opened by the Windows service account under which it was
created or by a principal with access to both the service account name and its password
• Option 1: F1 > D2 > T5
• Option 2: F1 > D1 > T3 > T4 > T5
• Option 3: F1 > T1 > T2 > T3 > T4 > T5
ALTER AVAILABILITY GROUP AccountsAG
MODIFY REPLICA ON 'INSTANCE09' WITH (SESSION_TIMEOUT = 15);
DBCC PROCCACHE – This command will show you information about the procedure cache and how much is being used.
Symmetric Key – In Symmetric cryptography system, the sender and the receiver of a message share a single, common key that is used to encrypt and decrypt the message. This is relatively easy to implement, and both the sender and the receiver can encrypt or decrypt the messages.
Asymmetric Key – Asymmetric cryptography, also known as Public-key cryptography, is a system in which the sender and the
receiver of a message have a pair of cryptographic keys – a public key and a private key – to encrypt and decrypt the message.
This is a relatively complex system where the sender can use his key to encrypt the message but he cannot decrypt it. The
receiver, on the other hand, can use his key to decrypt the message but he cannot encrypt it. This intricacy has turned it into a
resource-intensive process.
• Option 1: F2 > T6
• Option 2: F1 > D2 > T5 > T6
• Option 3: F1 > D1 > T3 > T4 > T5 > T6
• Option 4: F1 > T1 > T2 > T3 > T4 > T5 > T6
As part of the availability group creation process, we have to make an exact copy of the data on the primary replica on the
secondary replica. This is known as the initial data synchronization for the Availability Group.
DBCC TRACEON – This command will turn on a trace flag to capture events in the error log. DBCC TRACEOFF – This command
turns off a trace flag.
SQL Server 2005 contains configuration tools such as a system stored procedure called sp_configure or SQL Server Surface Area Configuration tool (for services and features) in order to enable/disable optional features as needed. Those features are usually installed as disabled by default. Here is the list of the features that can be enabled using the tool:
• xp_cmdshell
• SQL Server Web Assistant
• CLR Integration
• Ad hoc remote queries (the OPENROWSET and OPENDATASOURCE functions)
• OLE Automation system procedures
• System procedures for Database Mail and SQL Mail
• Remote use of a dedicated administrator connection
• Option 1: F2 > D3 >T8 with STOPAT Time stamp of P13
• Option 2: F2 > T6 > T7 > T8 with STOPAT Time stamp of P13
• Option 3: F1 > D2 > T5 > T6 > T7 > T8 with STOPAT Time stamp of P13
• Option 4: F1 > D1 > T3 > T4 > T5 > T6 > T7 > T8 with STOPAT Time stamp of P13
• Option 5: F1 > T1 > T2 > T3 > T4 > T5 > T6 > T7 > T8 with STOPAT Time stamp of P13
We can use DBCC OPENTRAN to check for any running transactions in the database. It is one of the most commonly used DBCC commands, along with DBCC CHECKDB, DBCC SHRINKFILE, DBCC SQLPERF(logspace), etc.
SQL Server is divided into two main engines: the Relational Engine and the Storage Engine. The Relational Engine contains below components:
• Cmd Parser
• Optimizer
• Query Executor
The Storage Engine contains below components:
• Access Methods code
• Buffer Manager
• Transaction Manager
Database servers get changed with service packs and new releases. New object types get added and the lower versions cannot understand these object types.
In order to avoid such conflicts and problems – Higher end database restorations cannot be performed directly on lower end
database servers.
Yes, we can specify the read_only intent in the connection string and add only secondaries (not the primary) to the
read_only_routing list. If you want to disallow direct connections to the primary from read_only connections, then set its
allow_connections to read_write.
As far as estimating how long DBCC CHECKDB will take to run on a given database, it’s very difficult to tell because there are so many variables involved. The following are some factors that affect DBCC CHECKDB’s run time:
The Relational Engine is also sometimes called the query processor because its primary function is query optimization and execution. The main responsibilities of the relational engine are:
• Parsing the SQL statements.
The parser scans an SQL statement and breaks it down into the logical units, such as keywords, parameters, operators, and identifiers. The parser also breaks down the overall SQL statement into a series of smaller logical operations.
• Optimizing the execution plans.
Typically, there are many ways that the server could use data from the source tables to build the result set. The query optimizer determines what these various series of steps are, estimates the cost of each series (primarily in terms of file I/O), and chooses the series of steps that has the lowest cost. It then combines the specific steps with the query tree to produce an optimized execution plan.
• Executing the series of logical operations defined in the execution plan.
After the query optimizer has defined the logical operations required to complete a statement, the relational engine steps through these operations in the sequence specified in the optimized execution plan.
• Processing Data Definition Language (DDL) and other statements.
These statements are not the typical SELECT, INSERT, UPDATE, or DELETE statements; these statements have special processing needs. Examples are the SET statements to set connection options, and the CREATE statements to create objects in a database.
• Formatting results.
The relational engine formats the results returned to the client. The results are formatted as either a traditional, tabular result
set or as an XML document. The results are then encapsulated in one or more TDS packets and returned to the application.
There is no proper method to restore a database from an upper version to a lower version. However, we can use the below techniques to perform this task:
• Script out the database Objects and create these on the target database
• Use SQL Server Import Export Wizard to copy data from source server to destination server (only Data)
• Copy data to destination tables using BCP (only Data)
It will be automatically expanded on the Secondary replica.
DBCC DBREINDEX is an offline operation used to rebuild the indexes of a table dynamically. This operation requires enough space in the data files; if the space is not enough, DBCC DBREINDEX may be unable to rebuild the indexes.
DBCC CHECKDB is used to produce a consistent view of the data by performing a physical consistency check on indexed views and validating the integrity of the indexes, objects, etc. In earlier versions of SQL Server this required locking; newer versions instead read the transaction log of the oldest active transaction, and REDO and UNDO of those transactions account for in-flight changes to available free space.
SQL Server Network Interface (SNI) is a protocol layer that establishes the network connection between the client and the server. It consists of a set of APIs that are used by both the database engine and the SQL Server Native Client (SNAC). SQL Server has support for the following protocols:
• Shared memory
• TCP/IP
• Named Pipes
• VIA — Virtual Interface Adapter
No, it is not possible.
TDS is a Microsoft-proprietary protocol originally designed by Sybase that is used to interact with a database server. Once a
connection has been made using a network protocol such as TCP/IP, a link is established to the relevant TDS endpoint that
then acts as the communication point between the client and the server.
No, but we can allow SQL Server to automatically create statistics on read-only secondary replicas.
There are two types of dynamic management views and functions:
Server-scoped dynamic management views and functions (e.g., OS, IO, threads, tasks). These require VIEW SERVER STATE permission on the server.
Database-scoped dynamic management views and functions (e.g., indexes, tables, partitions, files). These require VIEW DATABASE STATE permission on the database.
The Command Parser’s role is to handle T-SQL language events. It first checks the syntax and returns any errors back to the protocol layer to send to the client. If the syntax is valid, then the next step is to generate a query plan or find an existing plan. A query plan contains the details about how SQL Server is going to execute a piece of code. It is commonly referred to as an execution plan.
To check for a query plan, the Command Parser generates a hash of the T-SQL and checks it against the plan cache to determine
whether a suitable plan already exists. The plan cache is an area in the buffer pool used to cache query plans. If it finds a match, then the plan is read from cache and passed on to the Query Executor for execution. Otherwise an Execution plan is created by
the optimizer.
No, it is not possible.
Yes. If the secondary is in synchronous-commit mode and is in the “SYNCHRONIZED” state, you can manually fail over without data loss. If the secondary is not in a synchronized state, then a manual failover is allowed, but with possible data loss.
DMVs are in-memory structures that SQL Server uses internally anyway. It was with SQL Server 2005 that Microsoft started exposing them in an official manner, rather than requiring bit manipulation with some DBCC commands. Hence there is nothing to worry about regarding load or memory consumption; it is not as alarming as you might think.
An execution plan is composed of primitive operations. Examples of primitive operations are: reading a table completely, using an index, performing a nested loop or a hash join. All primitive operations have an output: their result set. Some, like the nested loop, have one input. Others, like the hash join, have two inputs. Each input should be connected to the output of another primitive operation. That's why an execution plan can be sketched as a tree: information flows from the leaves to the root.
Mirrored backup media sets are supported only in the Enterprise edition of SQL Server.
Mirroring a media set increases backup reliability by reducing the impact of backup-device malfunctions. These malfunctions are very serious because backups are the last line of defense against data loss. As a database grows, the probability increases that a failure of a backup device or media will make a backup non-restorable. Mirroring backup media increases the reliability of backups by providing redundancy.
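A minimal sketch of a mirrored backup (database name and paths are hypothetical):

-- Write the same backup to two media sets; WITH FORMAT initializes both
BACKUP DATABASE SalesDb
TO DISK = 'D:\Backups\SalesDb.bak'
MIRROR TO DISK = 'E:\Backups\SalesDb_mirror.bak'
WITH FORMAT;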
There are two options to configure secondary replica for running read workload. The first option ‘Read-intent-only’ is used to
provide a directive to AlwaysOn secondary replica to accept connections that have the property ApplicationIntent=ReadOnly
set. The word ‘intent’ is important here as there is no application check made to guarantee that there are no DDL/DML
operations in the application connecting with ‘ReadOnly’ but an assumption is made that customer will only connect read
workloads.
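A sketch of configuring a secondary replica for read-intent connections (availability group and replica names are hypothetical):

-- Only connections specifying ApplicationIntent=ReadOnly will be accepted
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON 'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));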
The below DMVs can be used to provide query-plan-related information (see the example that follows):
• sys.dm_exec_query_stats
• sys.dm_exec_sql_text
• sys.dm_exec_query_plan
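For example, the three can be combined to list the most expensive cached queries along with their plans (a sketch):

SELECT TOP (10)
    qs.total_worker_time, qs.execution_count,
    st.text AS query_text, qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;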
The plan cache is part of SQL Server's buffer pool and is used to store execution plans in case they are needed later, when the same type of script is submitted by users.
No, it's not possible to make the primary filegroup read-only.
Yes. If a corrupt page is detected, SQL Server will attempt to repair the page by getting it from another replica.
sys.dm_audit_actions
sys.dm_audit_class_type_map
sys.dm_cryptographic_provider_properties
sys.dm_database_encryption_keys
sys.dm_server_audit_status
The Optimizer is one of the important assets of a database engine; it is a component by which one RDBMS distinguishes itself from another. The primary function of the optimizer is to generate execution plans.
Filegroups can be marked as read-only. Any existing filegroup, except the primary filegroup, can be marked as read-only. A filegroup marked read-only cannot be modified in any way. Read-only filegroups can be compressed.
ALTER DATABASE ReadFilegroup MODIFY FILEGROUP Test1FG1 Read_Only;
sys.dm_os_buffer_descriptors
sys.dm_os_child_instances
sys.dm_os_cluster_nodes
sys.dm_os_hosts
sys.dm_os_nodes
sys.dm_os_memory_pools
sys.dm_os_performance_counters
sys.dm_os_process_memory
sys.dm_os_schedulers
sys.dm_os_memory_objects
sys.dm_os_workers
The Query Executor’s job is self-explanatory; it executes the query. To be more specific, it executes the query plan by working
through each step it contains and interacting with the Storage Engine to retrieve or modify data.
• Can be compressed (using NTFS compression)
• During recovery you don’t need to apply logs to recover a read-only file group
• Protection of data from accidental modifications
Replication is a feature of SQL Server that can move data and database objects in an automated way from one database to another database. This allows users to work with the same data at different locations, and changes that are made are transferred to keep the databases synchronized.
sys.dm_db_file_space_usage
sys.dm_db_partition_stats
sys.dm_db_session_space_usage
sys.dm_db_task_space_usage
Access Methods is a collection of code that provides the storage structures for your data and indexes, as well as the interface through which data is retrieved and modified. It contains all the code to retrieve data but it doesn’t actually perform the operation itself; it passes the request to the Buffer Manager.
Suppose our SELECT statement needs to read just a few rows that are all on a single page. The Access Methods code will ask
the Buffer Manager to retrieve the page so that it can prepare an OLE DB rowset to pass back to the Relational Engine.
Please refer to the below URL to get a SQL Script to extract the user permissions.
http://dbathings.com/database-permissions-extraction-script/
sys.dm_db_mirroring_connections
sys.dm_db_mirroring_auto_page_repair
The buffer management component consists of two mechanisms: the buffer manager to access and update database pages, and the buffer cache (also called the buffer pool), to reduce database file I/O.
The Buffer Manager, as its name suggests, manages the buffer pool, which represents the majority of SQL Server’s memory usage. If you need to read some rows from a page, the Buffer Manager checks the data cache in the buffer pool to see if it already has the page cached in memory. If the page is already cached, then the results are passed back to the Access Methods.
If the page isn’t already in cache, then the Buffer Manager gets the page from the database on disk, puts it in the data cache,
and passes the results to the Access Methods.
The only time that you can do this is when the DB was shut down cleanly before the log file was lost. It’s still not a good idea. While attaching a data file without the log file may be possible in some circumstances, it is not a recommended approach and is only for cases when the log file has been damaged or lost due to hardware problems and there are no backups available. Of course, you cannot have a database without a log file, so SQL Server just recreates a log file when you attach the database.
Attaching a data file without the log file breaks the log chain and may render the database transactionally or structurally
inconsistent depending on the state of the database before the log file was lost.
Execution Related
• sys.dm_exec_connections
• sys.dm_exec_sessions
• sys.dm_exec_requests
• sys.dm_exec_cached_plans
• sys.dm_exec_query_plan
• sys.dm_exec_sql_text
• sys.dm_exec_query_stats
Index Related
• sys.dm_db_index_physical_stats
• sys.dm_db_index_usage_stats
SQL Server Operating System
• sys.dm_os_performance_counters
• sys.dm_os_schedulers
• sys.dm_os_nodes
• sys.dm_os_waiting_tasks
• sys.dm_os_wait_stats
I/O Related
• sys.dm_io_virtual_file_stats
The buffer pool consists of various types of caches, such as the data cache, plan cache, and log cache. The data cache is the most important part of the buffer pool; it is used to store the various types of pages needed to serve queries. Suppose we run a SELECT query that returns all the rows of a table: all the data pages of that table are required to fulfill the query, so they are first read from disk into the buffer pool. This operation of reading data pages from disk into memory is known as physical I/O. If we run the same query again, there is no need to read the data pages from disk, because they are already in the buffer pool. This operation is known as logical I/O. (See the example below.)
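The difference can be observed with SET STATISTICS IO (table name hypothetical):

SET STATISTICS IO ON;
-- First run reports physical reads (pages fetched from disk);
-- running it again reports logical reads only, served from the buffer pool
SELECT * FROM dbo.SalesOrders;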
Yes, it is possible to perform a point-in-time recovery with the bulk-logged recovery model, as long as no minimally logged operation has been performed on the database.
The reason is that, on the subscriber, rows are updated/deleted one by one using the primary key.
For example:
If you delete 100 rows on the publisher using a single DELETE statement, 100 DELETE statements are executed on the subscriber.
-- on publisher
DELETE FROM dbo.tbAddress WHERE City = 'LONDON'
-- on subscriber
DELETE FROM dbo.tbAddress WHERE pk = @pk
The Transaction Manager interacts with the Access Methods and has two components through which it works on transactions:
Lock Manager: It is responsible for providing concurrency to the data, and it delivers the configured level of isolation by using locks.
Log Manager: It writes the changes to the transaction log. Writing to the transaction log is the only part of a data modification transaction that always needs a physical write to disk, because SQL Server depends on being able to reread that change in the event of a system failure.
A high-availability solution masks the effects of a hardware or software failure and maintains the availability of applications so that the perceived downtime for users is minimized. SQL Server high-availability solutions improve the availability of servers or databases.
Primary Key:
This is a basic rule that every article should have a Primary Key to be a candidate table for Transactional Replication. Primary keys are used to maintain uniqueness of records and to maintain referential integrity between tables, and that is why it is recommended for every article to have a primary key.
Securing snapshot folder:
Schedule:
Network bandwidth:
Enough disk space for database being published:
We need to make sure that we have ample space available for the transaction log of the published database, as it will continue to grow and the log records will not be truncated until they are moved to the distribution database. Please note that even in the simple recovery model, the log file can grow large if replication breaks. That is the reason it is recommended to set the T-log's auto-grow option to “true”. We should also make sure that the distribution database is available and the Log Reader Agent is running.
Enough disk space for distribution database:
It is necessary to have enough disk space allocated to the distribution database. This is because the distribution database stores the transactions marked for replication until they are applied to the subscriber database within the distribution retention period (72 hours by default), or until the Snapshot Agent re-runs and creates a new snapshot.
Use domain account as service account:
We should always use a domain account as the service account, so that when agents access the shared folder of snapshot files, they won't have any problem simply because they are local to the system and do not have permission to access the network share. When specifying the service account, we are asked to choose from the built-in accounts (Local System account, Network Service) and “This account”, where we specify the domain account under which the service will run.
At the time a modification is made to a page in the buffer, a log record is built in the log cache recording the modification. This
log record must be written to disk before the associated dirty page is flushed from the buffer cache to disk. SQL Server has logic
that prevents a dirty page from being flushed before the associated log record. Because log records are always written ahead of the associated data pages, the process is called write-ahead logging (WAL).
At a high level, there are five main high-availability options, including a new feature set released with SQL Server 2012:
• Log Shipping
• Mirroring
• Replication
• Clustering
• AlwaysOn
DBCC TRACESTATUS can be used to find out all active trace flags (see below).
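For example:

DBCC TRACESTATUS (-1);  -- list all trace flags that are currently enabled globally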
When a page is read from disk into memory it is regarded as a clean page because it’s exactly the same as its counterpart on the disk. However, once the page has been modified in memory it is marked as a dirty page.
A dirty page is simply a page that has changed in memory since it was loaded from disk and is now different from the on-disk
page.
Log shipping is the process of automating the backup of database and transaction log files on a production SQL server, and then
restoring them onto a standby server. But this is not all. The key feature of log shipping is that it will automatically backup
transaction logs throughout the day (for whatever interval you specify) and automatically restore them on the standby server.
This in effect keeps the two SQL Server databases in sync.
Publisher
The Publisher is a server that makes data available for replication to other servers. In addition to being the server where you specify which data is to be replicated, the Publisher also detects which data has changed and maintains information about all publications at that site. Usually, any data element that is replicated has a single Publisher, even if it may be updated by several Subscribers or republished by a Subscriber. The publication database is the database on the Publisher that is the source of data and database objects to be replicated. Each database used in replication must be enabled as a publication database either through the Configure Publishing and Distribution Wizard, the Publisher and Distributor properties, by using the sp_replicationdboption system stored procedure, or by creating a publication on that database using the Create Publication Wizard.
Distributor
The Distributor is a server that contains the distribution database and stores meta data, history data, and/or transactions. The Distributor can be a separate server from the Publisher (remote Distributor), or it can be the same server as the Publisher (local Distributor). The role of the Distributor varies depending on which type of replication you implement, and in general, its role is much greater for snapshot replication and transactional replication than it is for merge replication.
Subscribers
Subscribers are servers that receive replicated data. They subscribe to publications, not to individual articles within a
publication, and they subscribe only to the publications that they need, not necessarily all of the publications available on a
Publisher. If you have applications using transactional replication built with Microsoft® SQL Server™ version 6.5 or later, and
those applications subscribe directly to articles instead of to publications, the applications will continue to work in SQL Server
2000. However, you should begin to migrate your subscriptions to the publication level where each publication is composed of
one or more articles.
There are two types of trace flags: session and global. Session trace flags are active for a connection and are
visible only to that connection. Global trace flags are set at the server level and are visible to every
connection on the server. Some flags can only be enabled as global, and some can be enabled at either
global or session scope.
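For example (the flags shown are illustrative):

-- Enable a trace flag for the current session only
DBCC TRACEON (3604);
-- Enable a trace flag globally by passing -1 (e.g. 1222 to write deadlock details to the error log)
DBCC TRACEON (1222, -1);
-- Turn it back off
DBCC TRACEOFF (1222, -1);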
Dirty pages are written to disk on the following events.
• Lazy writing is a process to move pages containing changes from the buffer onto disk. This frees up buffers for use by other pages.
• Checkpoint writes all dirty pages to disk. SQL Server periodically commits a CHECKPOINT to ensure all dirty pages are flushed to disk.
• Explicitly issuing a CHECKPOINT will force a checkpoint
Examples of events causing a CHECKPOINT
1. net stop mssqlserver
2. SHUTDOWN
3. ALTER DATABASE adding a file
• Eager writing – Minimally logged operations such as non-logged bcp, SELECT INTO, WRITETEXT, UPDATETEXT, and BULK INSERT are examples. To speed up these tasks, eager writing manages page creation and page writing in parallel; the requestor does not need to wait for all the page creation to occur before pages are written.
Let's take a more comprehensive look at the benefits of log shipping:
• Log shipping doesn’t require expensive hardware or software. While it is great if your standby server is similar in capacity to your production server, it is not a requirement.
• Once log shipping has been implemented, it is relatively easy to maintain.
• Assuming you have implemented log shipping correctly, it is very reliable.
• The manual failover process is generally very short, typically 15 minutes or less.
• Implementing log shipping is not technically difficult.
Article
An article identifies a database object that is included in a publication. A publication can contain different types of articles, including tables, views, stored procedures, and other objects. When tables are published as articles, filters can be used to restrict the columns and rows of the data sent to Subscribers.
Publication
A publication is a collection of one or more articles from one database. The grouping of multiple articles into a publication makes it easier to specify a logically related set of database objects and data that are replicated as a unit.
Subscription
A subscription is a request for a copy of a publication to be delivered to a Subscriber. The subscription defines what
publication will be received, where, and when. There are two types of subscriptions: push and pull.
The SELECT permission check for DBCC SHOW_STATISTICS can be disabled using trace flag 9485.
A checkpoint is a point in time created by the checkpoint process at which SQL Server can be sure that any committed
transactions have had all their changes written to disk. This checkpoint then becomes the marker from which database
recovery can start. The checkpoint process ensures that any dirty pages associated with a committed transaction are flushed
to disk.
• Log shipping failover is not automatic. The DBA must still manually failover the server, which means the DBA must be present when the failover occurs.
• The users will experience some downtime. How long depends on how well you implemented log shipping, the nature of the production server failure, your network, the standby server, and the application or applications to be failed over.
• Some data can be lost, although not always. How much data is lost depends on how often you schedule log shipping and whether or not the transaction log on the failed production server is recoverable.
• The database or databases that are being failed over to the standby server cannot be used for anything else. But databases on the standby server not being used for failover can still be used normally.
• When it comes time for the actual failover, you must do one of two things to make your applications work: either
rename the standby server the same name as the failed production server (and the IP address), or re-point your
user’s applications to the new standby server. In some cases, neither of these options is practical.
It is not necessary to stop activity on the publication or subscription databases in order to add a table (or another object). Add a table to a publication through the Publication Properties dialog box.
The Lazywriter also flushes dirty pages to disk. SQL Server constantly monitors memory usage to assess resource contention (or availability); its job is to make sure that a certain amount of free space is available at all times. As part of this process, when it notices any such resource contention, it triggers the Lazywriter to free up some pages in memory by writing out dirty pages to disk. It employs the Least Recently Used (LRU) algorithm to decide which pages are to be flushed to disk.
Log Shipping works with the Full and Bulk-Logged recovery models.
Returns replication statistics about latency, throughput, and transaction count for each published database. This stored
procedure is executed at the Publisher on any database.
Log Flush also writes pages to disk. The difference here is that it writes pages from Log Cache into the Transactional log file (LDF). Once a transaction completes, LogFlush writes those pages (from Log Cache) to LDF file on disk.
Each and every transaction that results in data page changes, also incurs some Log Cache changes. At the end of each transaction
(commit), these changes from Log Cache are flushed down to the physical file (LDF).
Yes, it is possible to configure Log Shipping from a lower version to a higher version, but it is not possible vice versa.
Oracle and DB2 can subscribe to snapshot and transactional publications using push subscriptions. Subscriptions are supported for the two most recent versions of each database listed using the most recent version of the OLE DB provider listed.
However, Heterogeneous replication to non-SQL Server subscribers is deprecated. Oracle Publishing is deprecated. To move data, create solutions using change data capture and SSIS.
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature.
When a record is deleted from a clustered index data page, a non-clustered index leaf page, or a versioned heap page, or a forwarded record is recalled, the record is logically removed by marking it as deleted, but it is not physically removed from the page immediately. Records which are marked as deleted but not actually deleted physically are called ghost records.
This setting enables us to setup a monitor on the Log shipping through which we can monitor the log shipping process.
Transactional replication provides the tracer token feature, which provides a convenient way to measure latency in transactional replication topologies and to validate the connections between the Publisher, Distributor and Subscribers. A token (a small amount of data) is written to the transaction log of the publication database, marked as though it were a typical replicated transaction, and sent through the system, allowing a calculation of:
Ghost cleanup task: the SQL Server ghost cleanup task thread physically removes the records which are marked as deleted.
Internally, when Log Shipping is configured, four jobs are created between the Primary Server and Secondary Server: a Backup Job, Copy Job, Restore Job, and Alert Job.
• Backup job: This job is created on the Primary Server; it takes the transaction log backup of the database on a schedule.
• Copy job: This job is created on the Secondary Server; it copies the transaction log backups from the Primary Server to the standby/Secondary Server.
• Restore job: This job is created on the Secondary Server; it restores the copied transaction log backups on the Secondary Server.
The replmonitor database role in the distribution database. These users can monitor replication, but cannot change any
replication properties.
• The ghost cleanup task thread wakes up every 10 seconds.
• It sweeps databases one by one, starting from master.
• It skips a database if it is not able to take a shared lock on the database (LCK_M_S) or the database is not in an open read/write state.
• It scans the PFS pages of the current database to find the pages which have ghost records.
1. PFS page: A PFS page occurs once in every 8088 pages. SQL Server will attempt to place a PFS page on the first page of every PFS interval (8088 pages). The only time a PFS page is not the first page in its interval is in the first interval of a file.
2. In that case, the file header page is first and the PFS page is second. (Page IDs start from 0, so the first PFS page is at page ID 1.)
• It physically removes the records which are marked as deleted (ghosted).
• For the backup job, read/write permissions to the backup directory are required for the following:
- The SQL Server service account on the primary server instance.
- The proxy account of the backup job. By default, this is the SQL Server Agent account on the primary server instance.
• For the copy job, read permissions to the backup directory and write permissions to the copy directory are required by the proxy account of the copy job. By default, this is the SQL Server Agent account on the secondary server instance.
• For the restore job, read/write permission to the copy directory are required by the following: The SQL Server service
account on the secondary server instance. The proxy account of the restore job. By default, this is the SQL Server
Agent account on the secondary server instance.
There are four replication-related DMVs in SQL Server.
sys.dm_repl_articles
sys.dm_repl_schemas
sys.dm_repl_tranhash
sys.dm_repl_traninfo
• Shared memory — Simple and fast, shared memory is the default protocol used to connect from a client running on the same computer as SQL Server. It can only be used locally, has no configurable properties, and is always tried first when connecting from the local machine.
• TCP/IP — This is the most commonly used access protocol for SQL Server. It enables you to connect to SQL Server by specifying an IP address and a port number. Typically, this happens automatically when you specify an instance to connect to. Your internal name resolution system resolves the hostname part of the instance name to an IP address, and either you connect to the default TCP port number 1433 for default instances or the SQL Browser service will find the right port for a named instance using UDP port 1434.
• Named Pipes — TCP/IP and Named Pipes are comparable protocols in the architectures in which they can be used. Named Pipes was developed for local area networks (LANs) but it can be inefficient across slower networks such as wide area networks (WANs).
• VIA — Virtual Interface Adapter is a protocol that enables high-performance communications between two systems.
It requires specialized hardware at both ends and a dedicated connection.
• Created on the secondary server for every log shipping configuration.
• Copies the backup files from the backup folder into the copy folder.
• Deletes old files and old history from the copy folder.
• On the backup folder we have to grant read permission to the secondary server account, and read/write permissions on the copy folder.
Snapshot Replication would be good to use if:
1. You are sure that you will synchronize only once a day, and your business requirements do not include replicating transactions as and when they are committed on the publisher
2. The size of the replicated articles is small – maybe a few MBs/GBs
3. It does not matter that the replicated articles will be locked for some time (until the snapshot is generated)
Transactional Replication would be good to use if:
1. You want incremental changes to be propagated to Subscribers as they occur.
2. The application requires low latency between the time changes are made at the Publisher and the changes arrive at the Subscriber.
3. The application requires access to intermediate data states. For example, if a row changes five times, transactional replication allows an application to respond to each change (such as firing a trigger), not simply the net data change to the row.
4. The Publisher has a very high volume of insert, update, and delete activity.
SQL Server 2008 increases these capabilities by adding hot-add CPU as well. 'Hot add' means being able to plug in a CPU while the machine is running and then reconfigure SQL Server to make use of the CPU online (i.e. no application downtime required at all).
There are a few restrictions:
• Need a 64-bit system that support hot-add CPU (obviously :-))
• Need Enterprise Edition of SQL Server 2008
• Need Windows Server Datacenter or Enterprise Edition
An optional instance of SQL Server that tracks all of the details of log shipping, including:
• When the transaction log on the primary database was last backed up.
• When the secondary servers last copied and restored the backup files.
• Information about any backup failure alerts.
Peer-to-peer replication provides a scale-out and high-availability solution by maintaining copies of data across multiple server
instances, also referred to as nodes. Built on the foundation of transactional replication, peer-to-peer replication propagates
transactionally consistent changes in near real-time. This enables applications that require scale-out of read operations to
distribute the reads from clients across multiple nodes. Because data is maintained across the nodes in near real-time, peer-to-peer replication provides data redundancy, which increases the availability of data.
When SQL Server runs on a computer with more than one processor or CPU, it detects the best degree of parallelism, that is, the number of processors employed to run a single statement, for each query that has a parallel execution plan. You can use the max degree of parallelism option to limit the number of processors used for parallel plan execution and to prevent runaway queries from impacting SQL Server performance by using all available CPUs. (A configuration sketch follows.)
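A configuration sketch (the value 4 is illustrative, not a recommendation):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;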
SQL Server Log Shipping System Tables
• log_shipping_monitor_primary – Stores one monitor record per primary database in each log shipping configuration.
• log_shipping_monitor_secondary – Stores one monitor record per secondary database in a log shipping configuration.
• log_shipping_primary_databases – Stores one record for the primary database in a log shipping configuration.
• log_shipping_secondary – Stores one record per secondary ID
Merge replication allows multiple nodes to make autonomous data changes, so situations exist in which a change made at one node may conflict with a change made to the same data at another node. In other situations, the Merge Agent encounters an error such as a constraint violation and cannot propagate a change made at a particular node to another node.
The Merge Agent detects conflicts by using the lineage column of the MSmerge_contents system table; if column-level tracking is enabled for an article, the COLV1 column is also used. These columns contain metadata about when a row or column is inserted or updated, and about which nodes in a merge replication topology made changes to the row or column. You can use the system stored procedure sp_showrowreplicainfo (Transact-SQL) to view this metadata.
As the Merge Agent enumerates changes to be applied during synchronization, it compares the metadata for each row at the Publisher and Subscriber. The Merge Agent uses this metadata to determine if a row or column has changed at more than one node in the topology, which indicates a potential conflict. After a conflict is detected, the Merge Agent launches the conflict resolver specified for the article with a conflict and uses the resolver to determine the conflict winner. The winning row is applied at the Publisher and Subscriber, and the data from the losing row is written to a conflict table.
Conflicts are resolved automatically and immediately by the Merge Agent unless you have chosen interactive conflict
resolution for the article.
Batch: A SQL batch is a set of one or more Transact-SQL statements sent from a client to an instance of SQL Server for execution. It represents a unit of work submitted to the Database Engine by users.
Task: A task represents a unit of work that is scheduled by SQL Server. A batch can map to one or more tasks. For example, a parallel query will be executed by multiple tasks.
Windows thread: A Windows thread represents an independent execution mechanism.
Fiber: A fiber is a lightweight thread that uses fewer resources than a Windows thread and can switch context in user mode. One Windows thread can be mapped to many fibers.
Worker Thread
The worker thread represents a logical thread in SQL Server that is internally mapped (1:1) to either a Windows thread or, if lightweight pooling is turned ON, to a fiber. The mapping is maintained until the worker thread is deallocated, either because of memory pressure or because it has been idle for a long time. The association of a task to a worker thread is maintained for the life of the task.
SQL Server will issue a series of steps to synchronize the information between primary/secondary server and the monitor server. This can be implemented by running the below undocumented log shipping stored procedures:
• sp_processlogshippingmonitorprimary
• sp_processlogshippingmonitorsecondary
Transactional replication supports publishing LOBs and performs partial updates on LOB columns: if a LOB column is updated, only the fragment of data changed is replicated, rather than all the data in the column.
If a published table includes any LOBs, consider using the following Distribution Agent parameters: -UseOledbStreaming, -OledbStreamThreshold, and -PacketSize. The most straightforward way to set these parameters is to use the Distribution Agent profile titled Distribution Profile for OLEDB streaming.
The process of replicating text, ntext and image data types in a transactional publication is subject to a number of considerations. It is recommended that you use the varchar(max), nvarchar(max), and varbinary(max) data types instead of text, ntext, and image, respectively.
SQL Server databases have three types of files:
• Primary data files
• Secondary data files
• Transaction Log files
Yes, it's possible in log shipping. While configuring log shipping we have the option to choose standby or no-recovery mode; we select the STANDBY option to make the secondary database read-only.
No. We would need to drop the publications, rename the database, and re-configure replication all over again, so there is no easy way to do this.
By enabling trace flag 3226.
Interview questions on Stored Procedures
Primary data files
The primary data file is the starting point of the database and points to the other files in the database. Every database has one primary data file. The recommended file name extension for primary data files is .mdf.
Secondary data files
Secondary data files make up all the data files, other than the primary data file. Some databases may not have any secondary data files, while others have several secondary data files. The recommended file name extension for secondary data files is .ndf.
Transaction Log file
This file holds all the log information that is used to recover the database. There must be at least one log file for each
database, although there can be more than one. The recommended file name extension for log files is .ldf.
We can reduce the load on our primary database by using a secondary server for read-only query processing. To do this, the
secondary database must be in STANDBY mode.
No. You could create a DTS\SSIS package to transfer logins and passwords from a Publisher to one or more Subscribers.
Stored procedures are a batch of SQL statements that can be executed in a couple of ways. Most major DBMSs support stored procedures; however, not all do.
The primary data file contains system objects, whereas secondary data files contain all user-defined database objects if they are not part of the primary filegroup.
One important difference between primary and secondary data files is the BOOT PAGE. Page number 9 is the boot page (page type 13), and it is available as the 9th page only in the primary data file.
There are two options for configuration when we place the secondary database in standby mode:
• We can choose to have database users disconnected when transaction log backups are being restored. If we choose this option, users will be disconnected from the database each time the log shipping restore job attempts to restore a transaction log to the secondary database. Disconnection will happen on the schedule you set for the restore job.
• We can choose not to disconnect users. In this case, the restore job cannot restore transaction log backups to the
secondary database if there are users connected to that database. Transaction log backups will accumulate until there
are no user connections to the database.
No special considerations are required because all data is stored on one set of disks on the cluster.
The length of time that the locks are taken depends on the type of replication used:
Because locks prevent other users from updating the tables, the Snapshot Agent should be scheduled to execute during
periods of lower activity on the database, especially for snapshot publications.
Database File groups: Database objects and files can be grouped together in file groups for allocation and administration purposes. There are two types of file groups:
Primary: The primary file group contains the primary data file and any other files not specifically assigned to another file group. All pages for the system tables are allocated in the primary file group.
User-defined: User-defined file groups are any file groups that are specified by using the FILEGROUP keyword in a CREATE
DATABASE or ALTER DATABASE statement.
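A minimal sketch of creating a database with a user-defined filegroup (all names and paths are hypothetical):

CREATE DATABASE SalesDb
ON PRIMARY
    (NAME = SalesDb_Primary, FILENAME = 'C:\Data\SalesDb.mdf'),
FILEGROUP SalesFG
    (NAME = SalesDb_Data1, FILENAME = 'C:\Data\SalesDb_1.ndf')
LOG ON
    (NAME = SalesDb_Log, FILENAME = 'C:\Data\SalesDb.ldf');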
The log shipping will not work. We have to set up Log Shipping again.
Replication is not dependent on any particular recovery model. A database can participate in replication whether it is in
simple, bulk-logged, or full. However how data is tracked for replication depends on the type of replication used.
No, it will not be created on the secondary server.
Replication supports publishing articles in multiple publications (including republishing data) with the following restrictions:
Yes, it is possible to change the default filegroup to a user-defined filegroup. All newly created objects will then be created in the user-defined filegroup by default (see the sketch below).
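A sketch, assuming the hypothetical database and filegroup names used earlier:

ALTER DATABASE SalesDb MODIFY FILEGROUP SalesFG DEFAULT;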
Yes, it will be created automatically on the secondary database.
Yes. There are no restrictions on the number or types of publications that can use the same distribution database. All publications from a given Publisher must use the same Distributor and distribution database.
If you have multiple publications, you can configure multiple distribution databases at the Distributor to ensure that the data
flowing through each distribution database is from a single publication. Use the Distributor Properties dialog box
or sp_adddistributiondb (Transact-SQL) to add a distribution database.
Yes, it will be created automatically on the Secondary database if the file is added to the Primary database.
No. Replication does not encrypt data that is stored in the database or transferred over the network.
Log files are never part of a file group. Log space is managed separately from data space.
No, Log Shipping will hang. We have to manually restore the log backup with the MOVE option on the secondary database to rectify the issue.
For transactional replication, bulk inserts are tracked and replicated like other inserts. For merge replication, you must ensure
that change tracking metadata is updated properly.
Below are the major benefits which can be achieved by using multiple data files and placing these files in separate filegroups on separate disk drives:
1. Disk I/O performance
2. Easy management and archiving of the data
3. The ability to do filegroup-level backups and restores
4. Usage of filegroups in partitioning of tables
TRUNCATE TABLE is a minimally logged operation that does not fire triggers. It is not permitted because replication cannot track the changes caused by the operation: transactional replication tracks changes through the transaction log; merge replication tracks changes through triggers on published tables.
No, there is no benefit in adding multiple log files to a database, as writes to the transaction log are always serial.
Yes, we can configure Log Shipping on the server residing in different domains.
In some cases, it is desirable for user activity in a replication topology to be treated differently from agent activity. For example, if a row is inserted by a user at the Publisher and that insert satisfies a check constraint on the table, it might not be required to enforce the same constraint when the row is inserted by a replication agent at the Subscriber. The NOT FOR REPLICATION option allows you to specify that the following database objects are treated differently when a replication agent performs an operation:
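One common example is an identity column marked NOT FOR REPLICATION, so that inserts performed by a replication agent keep their original identity values instead of being reseeded (a minimal sketch; table and column names are hypothetical):

CREATE TABLE dbo.Orders
(
    OrderID INT IDENTITY(1,1) NOT FOR REPLICATION NOT NULL PRIMARY KEY,
    Amount MONEY NOT NULL
);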
Create database Test;
Yes, this script will work because the rest of the parameters will be taken from the model database, and the files will be placed in the respective folders that are set at the SQL Server instance level.
Yes. Replication processing resumes at the point at which it left off if a connection is dropped. If you are using merge
replication over an unreliable network, consider using logical records, which ensures related changes are processed as a unit.
This procedure closes the current error log file and cycles the error log extension numbers, just like a server restart.
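This matches the behavior of sp_cycle_errorlog, which can be invoked directly:

EXEC sp_cycle_errorlog;  -- closes the current error log and starts a new one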
We can execute the below Log Shipping System Stored Procedure to monitor log shipping and get detailed information about log shipping.
• sp_help_log_shipping_monitor – This is how SQL Server generates the Log Shipping Status report. The procedure returns the log shipping status (whether it is healthy or not) as well as metadata such as primary and secondary database names, time since last backup, last backup file, last restore file, etc.
• sp_help_log_shipping_monitor_primary – returns all columns from the log_shipping_monitor_primary table for the specified primary log shipping database. It returns server name, database name, time of last backup, backup threshold, threshold alert and history retention period.
• sp_help_log_shipping_monitor_secondary – returns all columns from log_shipping_monitor_secondary table for the
specified secondary log shipping database. It will return database name, server name, restore threshold, last copied
file, time of last copy / restore and history retention period.
In versions of SQL Server prior to SQL Server 2005, moving or renaming database files required detaching and reattaching the
database. Because a replicated database cannot be detached, replication had to be removed from these databases first.
Beginning with SQL Server 2005, you can move or rename files without detaching and re-attaching the database, with no effect on replication.
Running it with a view name as an argument updates the metadata for the specified non-schema-bound view. Persistent metadata for a view can become outdated because of changes to the underlying objects upon which the view depends.
Each database file that is associated with your database has an auto-growth setting. There are three different settings you can use to identify how your database files will grow. They can grow by a specific size, a percentage of the current size, or not grow at all.
Additionally, you can set your files to unrestricted growth, which means they will keep growing as they need more space until you run out of disk space, or you can restrict the growth of a database file to grow no larger than a specified size. Each of these auto-growth settings has a default, or you can set them per database file (see the sketch below).
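A sketch of setting a fixed growth increment and a size cap on one file (names and values are hypothetical):

ALTER DATABASE SalesDb
MODIFY FILE (NAME = SalesDb_Data1, FILEGROWTH = 256MB, MAXSIZE = 100GB);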
Yes, we can setup multiple secondary databases in Log Shipping.
Yes it can be done and there are no restrictions on the number or types of publications that can use the same distribution
database. One thing to note though is that all publications from a Publisher must use the same Distributor and distribution
database.
If you are required to set the auto-growth of the transaction log file, it should always be a specific size instead of a percentage.
Yes, we can shrink the log file, but we shouldn't use the TRUNCATE_ONLY option. If we use that option, log shipping will obviously be broken.
Yes it’s possible. We can take full backup of log shipped database and this won’t affect the log shipping.
There can be a number of possible causes for data not being delivered to Subscribers:
AUTO CLOSE option:
When set to ON, the database is shut down cleanly and its resources are freed after the last user exits. The database automatically reopens when a user tries to use the database again.
When set to OFF, the database remains open after the last user exits.
• 2012 – Enterprise, Business Intelligence, Standard, and Web
• 2008R2 – Datacenter, Enterprise, Standard, Web, and Workgroup
• 2008 – Enterprise, Standard, Web, and Workgroup
• 2005 – Enterprise, Standard, and Workgroup
AUTO SHRINK option: When set to ON, the database files are candidates for periodic shrinking. Both data file and log files can be shrunk automatically by SQL Server. AUTO_SHRINK reduces the size of the transaction log only if the database is set to SIMPLE recovery model or if the log is backed up.
When set to OFF, database files are not automatically shrunk during periodic checks for unused space.
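Both options are toggled with ALTER DATABASE; a minimal sketch (database name hypothetical):

ALTER DATABASE SalesDb SET AUTO_CLOSE OFF;
ALTER DATABASE SalesDb SET AUTO_SHRINK OFF;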
No, we won’t be able to execute BACKUP command against a log shipped database in secondary server.
The easiest way to monitor replication activity and performance is to use replication monitor, but you can also use the below tools to monitor replication performance:
Database mirroring, introduced with Microsoft SQL Server 2005, is a technology that can be used to design high-availability and high-performance solutions for database redundancy.
In database mirroring, transaction log records are sent directly from the principal database to the mirror database. This helps to keep the mirror database up to date with the principal database, with no loss of committed data. If the principal server fails, the mirror server automatically becomes the new principal server and recovers the principal database using a witness server under high-availability mode. We will discuss these modes later. To summarize, there are three terms to understand: the principal database is the active live database that supports all the commands, the mirror is the hot standby, and the witness allows for a quorum in case of an automatic switchover.
No. Activity can continue on a database while a publication is being created. Be aware that producing a snapshot can be
resource-intensive, so it is best to generate snapshots during periods of lower activity on the database (by default a snapshot
is generated when you complete the New Publication Wizard).
This option controls who and how many users can connect to a database.
When SINGLE_USER is specified, one user at a time is allowed to connect to the database. All other user connections are broken.
When RESTRICTED_USER is specified, only members of the db_owner fixed database role and dbcreator and sysadmin fixed server roles are allowed to connect to the database, but it does not limit their number.
When MULTI_USER is specified, all users that have the appropriate permissions to connect to the database are allowed.
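For example (database name hypothetical):

-- Take exclusive access, rolling back other connections immediately
ALTER DATABASE SalesDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ... perform maintenance ...
ALTER DATABASE SalesDb SET MULTI_USER;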
You cannot drop a table that is replicated. You have to first drop the article.
Each data and transaction log file in a SQL Server database has two names:
logical_file_name
The logical_file_name is the name used to refer to the physical file in all Transact-SQL statements. The logical file name must comply with the rules for SQL Server identifiers and must be unique among logical file names in the database.
os_file_name
The os_file_name is the name of the physical file including the directory path. It must follow the rules for the operating system
file names.
Issuing ALTER TABLE … DROP COLUMN at the Publisher, will result in the command being replicated to the Subscriber.
ROW_OVERFLOW_DATA: Assume that a table is created with a record size of 12,000 bytes, having 4 varchar columns of 4000 bytes each. Whenever a user inserts a record with a size greater than 8000 bytes (the page size is 8 KB), the excess data is moved to ROW_OVERFLOW_DATA pages. In simple terms, ROW_OVERFLOW_DATA pages come into the picture only when the row size exceeds the page's maximum limit.
New columns being added to a published article shouldn’t be replicated to the subscriber unless they really need to be there.
You can turn off the replication of schema changes by setting the ‘Replicate Schema Changes’ subscription option to ‘false’. (It
defaults to ‘true’.)
“Static row filters” allow you to include only certain rows in a given publication. There is overhead to applying the row filter
itself: Microsoft only recommends you use the row filters if your replication setup can’t handle replicating the full dataset.
TRUSTWORTHY property: When ON, database modules (for example, user-defined functions or stored procedures) that use an impersonation context can access resources outside the database.
When OFF, database modules in an impersonation context cannot access resources outside the database.
TRUSTWORTHY is set to OFF whenever the database is attached.
Peer-to-peer replication decreases or removes the dependency on the Distributor. In this replication topology each node is a Publisher, Distributor, and Subscriber. This increases the availability of the database system, and failure of any node does not impact the health of the replication process. This topology also offers automatic conflict detection and correction, and hence is recommended in real-time environments.
Data and log files are initialized to overwrite any existing data left on the disk from previously deleted files. Data and log files are first initialized by filling the files with zeros when you perform one of the following operations:
• Create a database.
• Add files, log or data, to an existing database.
• Increase the size of an existing file (including autogrow operations).
• Restore a database or file group.
Zero-initialization causes these operations to take longer. With instant file initialization, however, the file space is claimed without being filled with zeros; the disk content is simply overwritten as new data is written to the files, so the operations above complete much faster.
Instant file initialization is only available if the SQL Server (MSSQLSERVER) service account has been granted
SE_MANAGE_VOLUME_NAME. Members of the Windows Administrator group have this right and can grant it to other users by
adding them to the Perform Volume Maintenance Tasks security policy.
Ensure that SQL Browser is running and TCP/IP is enabled. Enforce TDE (Transparent Data Encryption) so that every data bit is
encrypted.
Data rows with all data, except text, ntext, image, nvarchar(max), varchar(max), varbinary(max), and xml data.
According to the documentation, you can only subscribe to a publication. However, assuming you have a publication which consists of several articles (tables) it is possible for each subscriber to subscribe to a subset of the publication. You create the publication and then set up the subscribers as per usual.
Then you drop the unnecessary articles on a per subscriber basis. This only works if your subscribers exist on separate servers, but if you need to do it, the procedure to run is:
EXEC sp_dropsubscription 'publicationname', 'tablename', 'subscribername'
Or, more simply, just run sp_addsubscription on a per-article basis.
Contains information related to Indexes. This holds index records in the upper levels of a clustered index and all levels of non-clustered indexes.
Merge Replication does NOT require a timestamp column (that's for transactional replication). Merge Replication only requires a ROWGUID column (which must be the ROWGUIDCOL for the article). Second, the conflict resolver didn't work the way I expected it to: the conflict resolver works by using a default rule to resolve a conflict, but it still records the fact that a conflict occurred. The record of the conflict (and the winning and losing records) is stored (by default) at the distributor. Manual intervention is required (opening the conflict viewer) to view and eliminate the conflicts. But if you want to be alerted to the existence of a conflict, you must write a separate monitoring tool.
Page number 0 is the file header (page type 15). It holds the file header information. There will be only one header page for each data file, and it resides in position 0. The header page stores file-level information such as file size, minimum size, max size, and file growth.
A local subscriber (also called a client subscriber) is one whose conflicts are resolved using the priority of the Publisher, as opposed to a global subscriber, which is defined with its own priority setting (from 0.00 to 99.99) used in conflict resolution.
The PFS page is the second page (page number 1) in a data file, following the file header (page number 0). PFS pages are used to track page-level allocation. PFS pages repeat after every 8088 pages.
GAM pages record which extents have been allocated for any use. A GAM page has a bit for every extent. If the bit is 1, the corresponding extent is free; if the bit is 0, the corresponding extent is in use as a uniform or mixed extent.
I have seen this a few times recently. In each case it occurred because of a restored database! Basically, there was an existing published database in the test environment which was correctly configured, worked normally and which showed up correctly in replication monitor. Someone then restored a backup from production on top of this test database without first removing the subscriptions and the publication.
Replication monitor at this stage shows the publication still existing but with an error. In this case running sp_removedbreplication, sp_droppublication, restarting the SQL Server service and the like will not solve the issue – still the replication monitor shows an error. I have tried removing some of the system metadata in the distribution database, which also failed to remove the error. Ultimately the only way I found to remove the publication from the replication monitor was to recreate a publication with exactly the same name and then delete it.
The dummy publication only needs the same name – the articles can be anything from the database – and once deleted the
replication monitor registers the change. No doubt there will be some sort of system proc to do this properly at some stage
and I’ll update this entry.
A GAM page can hold information about around 64,000 extents. That is, a GAM page covers (64,000 extents × 8 pages × 8 KB) / 1024 ≈ 4,000 MB. In short, a data file of size 7 GB will have two GAM pages.
You can limit the number of concurrent synchronizations; in merge this is done on the GUI or by setting
the @max_concurrent_merge property of sp_addmergepublication. For other agents you’ll need to make the necessary edits
to the registry.
SQL Server uses DCM pages to track extents modified after the last full backup operation. A DCM page has a bit for every extent it tracks. If the bit is 1, the corresponding extent has been modified since the last full backup; if the bit is 0, there has been no change in the corresponding extent since the last full backup.
Basically you need to add a column with a default constraint. After that the column is made nullable and the constraint can be removed. This worked for transactional publications. For merge it worked, but I had to rerun the merge agent after it failed once. The code I used is below:
ALTER TABLE tXXX ADD Salary INT NOT NULL DEFAULT 0
GO
ALTER TABLE tXXX ALTER COLUMN Salary INT NULL
GO
ALTER TABLE tXXX DROP CONSTRAINT DF__tXXX__Salary__353DDB1D
GO
A differential backup process scans through the DCM pages to identify the extents which have been modified after the last full backup, and adds those extents to the differential backup.
Database mirroring can be used in conjunction with replication to provide availability for the publication database.
Log shipping can also be used in conjunction with replication.
This tracks the extents that have been modified by bulk logged operations since the last BACKUP LOG statement. If the bit for an extent is 1, the extent has been modified by a bulk logged operation after the last BACKUP LOG statement. If the bit is 0, the extent has not been modified by bulk logged operations.
Although BCM pages appear in all databases, they are only relevant when the database is using the bulk-logged recovery
model. In this recovery model, when a BACKUP LOG is performed, the backup process scans the BCMs for extents that have
been modified. It then includes those extents in the log backup.
Here’s a nice trick for you! Some of these procedures aren’t accessible using sp_helptext and they also aren’t accessible using the OBJECT_DEFINITION function. However if you use the Dedicated Admin Connection (DAC), you’ll be able to access the real text of the procedure:
SELECT object_definition(object_id('sys.sp_MSrepl_helparticlecolumns'))
The trick is to open up a connection using the DAC (when you open a query window to “yourservername” just use
“ADMIN:yourservername” instead).
There is one special data page that exists only once per database. It is the database boot page. The database boot page is always stored in page 9 of file 1, the first file in the primary file group.
The database boot page contains a broad variety of data about the database itself. Here you can find the internal database
version, the compatibility level, the database name and the date of the last log backup.
Open up the replication alerts folder, double click on the alert you are interested, click on the browse button (the three
ellipses), click on the edit button, select always write to the Windows Event Log.
There are three types of allocation units.
IN_ROW_DATA (default): If a table is relatively simple in design (meaning the record size is less than 8000 bytes and no LOB data types are present), all records are stored in pages referred to as IN_ROW_DATA pages.
ROW_OVERFLOW_DATA: Assume that a table is created with a record size of 12,000 bytes, having 4 varchar columns of 4000 bytes each. Whenever a user inserts a record with a size greater than 8000 bytes (the page size is 8 KB), the excess data is moved to ROW_OVERFLOW_DATA pages. In simple terms, ROW_OVERFLOW_DATA pages come into the picture only when the row size exceeds the page's maximum limit.
LOB_DATA: LOB data such as text is not stored along with the data pages. LOB data is stored in special pages called LOB_DATA pages. A 16-byte pointer in the data page is used to refer to the LOB_DATA page.
The setting to enable firing of triggers during the initial bulk load is not exposed in SSMS, but it is configurable through the @fire_triggers_on_snapshot parameter of sp_addarticle and the 'fire_triggers_on_snapshot' property through sp_changearticle.
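A trimmed sketch of enabling it when adding an article (publication and object names are hypothetical; other sp_addarticle parameters are omitted for brevity):

EXEC sp_addarticle
    @publication = N'MyPub',
    @article = N'Orders',
    @source_object = N'Orders',
    @fire_triggers_on_snapshot = N'true';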
IAM stands for Index Allocation Map: To catalog all the pages that belong to a single allocation unit, SQL Server utilizes a
special type of pages, the Index Allocation Map or IAM pages. Each allocation unit has at least one IAM Page; depending on
the size of the table there might be many. The page type of an IAM page is 10.
The Replication features of SQL Server 2000 (64-bit) are nearly identical to the 32-bit version, with the following features being supported:
• Snapshot Replication
• Transactional Replication
• Merge Replication
• Immediately updating subscribers
• Queued updating subscribers
However, there are a few special cases:
(1) as the Microsoft Jet engine is not supported, Microsoft Jet push subscriptions for merge replication are not supported
(2) unless the subscriber provides a 64-bit ODBC or OLE DB driver, transactional or snapshot push subscriptions for ODBC or OLE DB subscribers are not supported
(3) because of the unavailability of 64-bit Data Transformation Services (DTS), transformable push subscriptions are not
supported.
SQL Server stores all data in 8192-byte sized blocks called pages. Several types of pages are in use within a typical database. One particularly interesting group is formed by the type-3 pages or Large Object Pages.
LOB_DATA Allocation Units
Most data types in SQL Server take up no more than 8000 bytes of storage. However, there are a few data types, which allow for larger pieces of information to be stored. Examples include the VARCHAR(MAX), VARBINARY(MAX) or XML data types.
Normal data pages that belong to a table are grouped in IN_ROW_DATA allocation units. However, if a value that is larger than
8000 bytes needs to be stored, SQL Server does not attempt to store it in those data pages anymore. It does not even store
those values in the same allocation unit. Instead, Large Object data or LOB data is stored in special LOB_DATA allocation units.
The value of “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent\Subsystems\Snapshot” should be changed to:
C:\Program Files\Microsoft SQL Server\MSSQL\BINN\SQLREPSS.DLL,C:\Program Files\Microsoft SQL Server\80\COM\SNAPSHOT.EXE,ReplStart,ReplEvent,ReplStop,120
Page splits are performed when there is no space in a data page to accommodate more inserts or updates. In a page split, SQL Server removes some data from the current data page and moves it to another data page, in order to make space for the new rows.
In merge replication the same table may be added to each publication. However, one of the publications will need to be set up as a no-sync, and if you are using automatic range management you'll receive a PK error like this:
"Violation of PRIMARY KEY constraint 'PK__MSrepl_identity___4D5F7D71'. Cannot insert duplicate key in object 'MSrepl_identity_range'."
So you'll need to use manual range management for your table article.
Too many page splits can decrease the performance of SQL Server because of the large number of I/O operations. The following remedies can be taken to avoid too many page splits:
• Rebuild indexes frequently, so that the free space reserved by the fill factor is replenished for new data.
• Lower the fill factor value after observing the page-split behavior.
Yes. There are no restrictions on the number or types of publications that can use the same distribution database. All publications from a given Publisher must use the same Distributor and distribution database.
If you have multiple Publishers, you can configure multiple distribution databases at the Distributor to ensure that the data flowing through each distribution database is from a single Publisher. Use the Distributor Properties dialog box or sp_adddistributiondb (Transact-SQL) to add a distribution database.
The number of page splits can be observed by using Performance Monitor and watching the SQLServer:Access Methods:Page Splits/sec counter.
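The same counter can also be read from T-SQL through the sys.dm_os_performance_counters DMV. As a sketch; note that this counter is cumulative since instance start, so two samples are needed to derive a per-second rate:
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page Splits/sec'
  AND object_name LIKE '%:Access Methods%';  -- matches default and named instances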
Yes, replication does work over low bandwidth connections. For connections over TCP/IP, it uses the compression provided by
the protocol but does not provide additional compression. For Web synchronization connections over HTTPS, it uses the
compression provided by the protocol and also additional compression of the XML files used to replicate changes.
The fill-factor option is provided for fine-tuning index data storage and performance. When an index is created or rebuilt, the fill-factor value determines the percentage of space on each leaf-level page to be filled with data, reserving the remainder on each page as free space for future growth.
For example, specifying a fill-factor value of 80 means that 20 percent of each leaf-level page will be left empty, providing
space for index expansion as data is added to the underlying table. The empty space is reserved between the index rows
rather than at the end of the index.
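As a sketch (index and table names here are hypothetical), the fill factor is specified when creating or rebuilding an index:
-- Leave 20 percent free space on each leaf-level page
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);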
By default, replication does not execute GRANT statements on the subscription database. If you want the permissions on the subscription database to match those on the publication database, use one of the following methods:
Logical Reads: A logical read indicates the total number of data pages that are found in, and read from, the data cache, without reading anything from disk.
Merge replication and snapshot replication do not affect transaction log size, but transactional replication can. If a database
includes one or more transactional publications, the log is not truncated until all transactions relevant to the publications have
been delivered to the distribution database. If the transaction log is growing too large, and the Log Reader Agent is running on
a scheduled basis, consider shortening the interval between runs. Or, set it to run in continuous mode. If it is set to run in
continuous mode (the default), ensure that it is running.
Physical Reads: A physical read indicates the total number of data pages that are read from disk. If none of the data is in the data cache, the number of physical reads will be equal to the number of logical reads. This usually happens for the first query request; for subsequent identical requests the number decreases substantially, because the data pages are already in the data cache.
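Both counts can be observed for any query by switching on the IO statistics output (dbo.Orders is again a hypothetical table):
SET STATISTICS IO ON;
SELECT COUNT(*) FROM dbo.Orders;
-- The Messages tab now reports logical reads and physical reads per table
SET STATISTICS IO OFF;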
You should follow these steps:
Use the sp_replmonitorsubscriptionpendingcmds stored procedure or the Undistributed Commands tab in Replication Monitor. The stored procedure and tab display:
A transaction log is a physical file in which SQL Server stores the details of all transactions and data modifications performed on the database. In the event of a disaster that causes SQL Server to shut down unexpectedly (power failure/hardware failure), the transaction log is used to bring the database to a consistent state while restarting the server. On restarting the server, the database goes through the recovery process.
During this recovery process, the transaction log is used to make sure that all committed transactions are written to their respective data files (rolled forward) and that uncommitted transactions are rolled back.
Logically transaction log is a set of log records. Each record is identified by a log sequence number (LSN). The new log records
are always written at the logical end of log file with a LSN which is greater than the previous one.
I think SQL Server Profiler is one of the must-have tools when dealing with performance issues. This tool has been available with SQL Server for a long time. SQL Server Profiler can perform various significant functions, such as tracing what is running and finding out how queries are resolved internally. The major functions this tool can perform are listed below:
SQL Server Database Engine divides each physical log file internally into a number of virtual log files. Virtual log files have no
fixed size, and there is no fixed number of virtual log files for a physical log file. The Database Engine chooses the size of the
virtual log files dynamically while it is creating or extending log files. The Database Engine tries to maintain a small number of
virtual files. The size of the virtual files after a log file has been extended is the sum of the size of the existing log and the size
of the new file increment. The size or number of virtual log files cannot be configured or set by administrators.
One should never leave a Profiler trace running on a production server. Profiler adds too much overhead to a production server, and it collects a surplus of data, making it extremely complex to analyze afterwards.
SQL server decides the size and number of VLF in a log file based on the size of the log file growth as given below.
Growth up to 64 MB = 4 VLF
From 64 MB to 1 GB = 8 VLF
Larger than 1 GB = 16 VLF
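The VLFs of a database can be inspected with the undocumented but widely used DBCC LOGINFO command; each row in the output represents one virtual log file:
DBCC LOGINFO;  -- one row per VLF in the current database
-- On SQL Server 2016 SP2 and later, the documented alternative is:
-- SELECT * FROM sys.dm_db_log_info(DB_ID());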
No, one should never leave a Profiler trace running forever on a server. It hardly matters whether the Profiler trace is running from machine A against server B or directly on server B.
Any time you open SQL Server Profiler and run a trace, you're running a client-side trace. To run a server-side trace, one needs to create a script. A server-side trace sometimes provides more flexibility compared to a Profiler trace. These four procedures can be used to create server-side traces: sp_trace_create, sp_trace_setevent, sp_trace_setfilter, and sp_trace_setstatus.
Every record in the SQL Server transaction log is uniquely identified by a log sequence number (LSN). LSNs are ordered such that if LSN2 is greater than LSN1, the change described by the log record referred to by LSN2 occurred after the change described by the log record referred to by LSN1.
Log truncation is essential to keep the log from filling. Log truncation deletes inactive virtual log files from the logical
transaction log of a SQL Server database, freeing space in the logical log for reuse by the physical transaction log. If a
transaction log were never truncated, it would eventually fill all the disk space that is allocated to its physical log files.
However, before the log can be truncated, a checkpoint operation must occur. A checkpoint writes the current in-memory
modified pages (known as dirty pages) and transaction log information from memory to disk. When the checkpoint is
performed, the inactive portion of the transaction log is marked as reusable. Thereafter, the inactive portion can be freed by
log truncation.
dm_exec_query_stats
dm_exec_sql_text
dm_exec_query_plan
Log truncation occurs automatically after the following events, except when delayed for some reason:
• Under the simple recovery model, after a checkpoint.
• Under the full recovery model or bulk-logged recovery model, after a log backup, if a checkpoint has occurred since
the previous backup.
The outcome of any function is evaluated at run time, so the SQL Server query engine has to scan the whole table to get the necessary data.
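For example (illustrative table and column names), the two queries below return the same rows, but only the second lets the optimizer seek an index on OrderDate:
-- Non-sargable: the function must be evaluated for every row (table scan)
SELECT * FROM dbo.Orders WHERE YEAR(OrderDate) = 2020;
-- Sargable: rewritten as a range predicate (index seek possible)
SELECT * FROM dbo.Orders
WHERE OrderDate >= '20200101' AND OrderDate < '20210101';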
• SQL Server Service
• SQL Server Agent service
• SQL Server Analysis Service
• SQL Server Browser service
• SQL Server Integration Service
• SQL Server Reporting Service
• SQL Server VSS Writer service
• Distributed Transaction Coordinator
• SQL Server Active Directory Helper
• SQL Full-Text Filter Daemon Launcher
• SQL Server Distributed Replay Client service
• SQL Server Distributed Replay Controller service
SQL Server service is the core of a SQL Server instance. It runs the Database Engine and executes the client requests related to data processing. If this service is not running, no users can connect to any of the databases, and hence users will not be able to fetch, insert, update or delete data.
SQL Server Agent is the primary scheduling engine in SQL Server. This is used to execute scheduled administrative tasks like
SSIS Packages, T-SQL Scripts, Batch Files and Subscriptions etc. which are referred to as Jobs. It uses msdb database to store
the configuration, processing, and metadata information. Apart from SQL Server Agent related information, msdb database
also stores similar information related to Backup, Restore, Log Shipping, SSIS Packages etc.
A non-clustered index that is extended to include non-key columns in addition to the key columns.
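A sketch with hypothetical names; the INCLUDE clause adds the non-key columns so that the index alone can cover a query on them:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);
A query selecting CustomerID, OrderDate and TotalDue can then be answered from the index alone, without a key lookup.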
Microsoft SQL Server Analysis Services (SSAS) delivers online analytical processing (OLAP) and data mining functionality for
business intelligence applications. Analysis Services supports OLAP by letting you design, create, and manage multidimensional
structures that contain data aggregated from other data sources, such as relational databases. For data mining applications,
Analysis Services lets you design, create, and visualize data mining models that are constructed from other data sources by
using a wide variety of industry-standard data mining algorithms.
SQL Server Integration Services (SSIS) is a component of the Microsoft SQL Server database software that can be used to
perform a broad range of data migration tasks. SSIS is a platform for data integration and workflow applications. It features a
fast and flexible data warehousing tool used for data extraction, transformation, and loading (ETL). The tool may also be used
to automate maintenance of SQL Server databases and updates to multidimensional cube data.
Index seek is an operation that only touches rows that qualify and pages that contain these qualifying rows.
This service acts as a listener for the incoming requests for Microsoft SQL Server resources. It provides information about the list of installed SQL Server instances on the computer to the client computers/applications. It helps in browsing the list of servers, locating and connecting to the correct server.
This listener service responds to client requests with the names of the installed instances, and the ports or named pipes used
by the instance.
In database mirroring, the transaction log records for a database are directly transferred from one server to another, thereby
maintaining a hot standby server. As the principal server writes the database’s log buffer to disk, it simultaneously sends that
block of log records to the mirror instance. The mirror server continuously applies the log records to its copy of the database.
Mirroring is implemented on a per-database basis, and the scope of protection that it provides is restricted to a single-user
database. Database mirroring works only with databases that use the full recovery model.
Index scan is an operation that touches every row in the table whether or not it qualifies.
This service is primarily used by SQL Server Reporting Services (SSRS) for browsing and viewing the reports on Reports Server,
through Report Server or Report Manager interface. It is used to manage the shared data sources, reports, shared data sets,
report parts, folder, etc. hosted on the Report Server. Reporting services are managed using the Reporting Services
Configuration Manager.
SQL Server avoids compilations of previously executed queries by using these four mechanisms to make plan caching accessible: ad hoc query caching, autoparameterization, sp_executesql, and stored procedures.
The SQL Writer Service provides added functionality for backup and restore of SQL Server through the Volume Shadow Copy Service framework. When running, Database Engine locks and has exclusive access to the data files. When the SQL Writer Service is not running, backup programs running in Windows do not have access to the data files, and backups must be performed using SQL Server backup.
Use the SQL Writer Service to permit Windows backup programs to copy SQL Server data files while SQL Server is running. It
must be running when the Volume Shadow Copy Service (VSS) application requests a backup or restore. To configure the
service, use the Microsoft Windows Services applet. The SQL Writer Service installs on all operating systems.
Executing DBCC FREESYSTEMCACHE clears the plan cache for the instance of SQL Server.
SQL Writer supports:
• Full database backup and restore including full-text catalogs
• Differential backup and restore
• Restore with move
• Copy-only backup
• Auto-recovery of database snapshot
DBCC FREESYSTEMCACHE
DBCC FREESESSIONCACHE
DBCC FREEPROCCACHE
SQL Writer does not support:
• Log backups
• File and filegroup backup
• Page restore
If there is more than one processor or CPU on a server where SQL Server is running, SQL Server detects the best degree of parallelism, that is, the number of processors employed to run a single statement, for each parallel plan execution.
This service is used by the full-text search feature of SQL Server. It helps in starting the filter daemon host process, which
manages the full-text indexing, querying, search filtering and word processing as part of the full-text search feature.
Principal server is the server which serves the databases requests to the Application.
The MAXDOP setting can be configured in the server-level properties, or by using exec dbo.sp_configure.
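A minimal sketch with sp_configure (the value 4 is only an example; 0 means let SQL Server decide):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;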
This service enables the integration with the Active Directory. Irrespective of number of instances of SQL Servers installed on a
computer, there is always only one instance of SQL Server Active Directory Helper service. This service is automatically started
by SQL Server when required and is stopped once the operation is completed. This service is required whenever an SQL Server
object needs to be created in the Active Directory to register an instance of SQL Server.
True – This functionality is possible with SQL Server Profiler.
This service coordinates distributed transactions between two or more database servers. Client applications use this service to
work with data from multiple sources in one transaction. There is always only one instance of MSDTC service running on a
computer irrespective of how many SQL server instances are installed. This service should be running on each of the servers
which handle distributed transactions. This service is not a part of SQL Server installation. This service is installed with
Windows OS installation.
This is an optional server. If the Principal server goes down, then the Witness server controls the failover process.
A query plan is the physical break down of the code being passed to the SQL Server optimizer. The value from a performance
tuning perspective is that each component of the query can be understood and the percentage of resource utilization can be
determined at a micro level. As query tuning is being conducted, the detailed metrics can be reviewed to compare the
individual coding techniques to determine the best alternative.
There are multiple ways through which we can check the number of SQL Server instances running on a server:
• Check the SQL services for the different instances
• SQL Server Configuration Manager: Start -> All Programs -> Microsoft SQL Server 2008 R2 -> Configuration Tools -> SQL Server Configuration Manager
• List the SQL services with the Net Start command
• Use PowerShell commands
• Read the registry keys: Regedit -> HKEY_LOCAL_MACHINE -> SOFTWARE -> Microsoft -> Microsoft SQL Server -> InstalledInstances
In synchronous mode, committed transactions are guaranteed to be recorded on the mirror server. Should a failure occur on the primary server, no committed transactions are lost when the mirror server takes over. Using synchronous mode provides transaction safety because the operational servers are in a synchronized state, and changes sent to the mirror must be acknowledged before the primary can proceed.
In asynchronous mode, committed transactions are not guaranteed to be recorded on the mirror server. In this mode, the
primary server sends transaction log pages to the mirror when a transaction is committed. It does not wait for an
acknowledgement from the mirror before replying to the application that the COMMIT has completed. Should a failure occur
on the primary server, it is possible that some committed transactions may be lost when the mirror server takes over.
False, this is just a myth.
There are different ways through which we can start or stop SQL Server services.
a) Go to Services –> Look for SQL server service related to the Instance
Named Instance: SQL Server(Instance Name)
Default Instance: SQL Server(MSSQLServer)
b) Right Click on the SQL Server instance in management studio and click on restart
c) Go to SQL Server Configuration Manager (SQLServerManager10.msc), right click on the service and click restart.
d) Net stop command
e) use “Net START” command to list all the running services
f) Use “Net STOP MSSQL$Instancename” to stop the SQL Service for a particular instance
SQL Server provides 3 operating modes for database mirroring.
The NOLOCK query hint allows SQL Server to ignore the normal locks that are placed and held for a transaction allowing the
query to complete without having to wait for the first transaction to finish and therefore release the locks. This is one short
term fix to help prevent locking, blocking or deadlocks. However, when the NOLOCK hint is used, dirty data is read which can
compromise the results returned to the user.
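A typical, illustrative use of the hint (table and column names are hypothetical):
-- May return uncommitted (dirty) rows
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK)
WHERE CustomerID = 42;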
The master database data file, master log file and error log file locations are the default parameters which are passed to the SQL Server service.
-dC:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\master.mdf;
-eC:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG;
-lC:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\mastlog.ldf
We can add trace flags and other parameters to the startup process from SQL Server Configuration manager.
It consists of the Principal, Witness and Mirror in synchronous communication. In this mode SQL Server ensures that each transaction that is committed on the Principal is also committed on the Mirror prior to continuing with the next transactional operation on the Principal. The cost of this configuration is high, as a Witness is required. If the network does not have the bandwidth, a bottleneck could form, causing performance issues on the Principal. If the Principal is lost, the Mirror can automatically take over.
If there are any configuration problems that prevent the server from starting, you can start an instance of Microsoft SQL Server by using the minimal configuration startup option. This is the startup option -f. Starting an instance of SQL Server with minimal configuration automatically puts the server in single-user mode.
It is pretty similar to High Availability mode, except that the Witness is not available; as a result, failover is manual. It also has transactional safety FULL, i.e. synchronous communication between principal and mirror. Even in this mode, if the network is poor, it might cause a performance bottleneck.
Under certain circumstances, we may have to start an instance of SQL Server in single-user mode by using the startup option -m. For example, you may want to change server configuration options or recover a damaged master database or other system database. Both actions require starting an instance of SQL Server in single-user mode.
It consists of only the Principal and the Mirror in asynchronous communication. Since the safety is OFF, automatic failover is not
possible, because of possible data loss; therefore, a witness server is not recommended to be configured for this scenario.
Manual failover is not enabled for the High Performance mode. The only type of failover allowed is forced service failover, which
is also a manual operation.
A deadlock is a situation where 2 spids have data locked and cannot release their lock until the opposing spid releases its lock. Depending on the severity of the deadlock, meaning the amount of data that is locked and the number of spids trying to access the same data, an entire chain of spids can hold locks and cause a number of deadlocks, resulting in a performance issue.
Database Mirroring is supported with Full Recovery model.
Deadlocks can be identified by Profiler in either textual, graphical or XML format
Trace flags are used to temporarily set specific server characteristics or to switch off a particular behavior. For example, if trace flag 3205 is set when an instance of SQL Server starts, hardware compression for tape drives is disabled. Trace flags are frequently used to diagnose performance issues or to debug stored procedures or complex computer systems.
An endpoint is a SQL Server object that enables network communication between the Principal, Mirror and Witness servers.
Deadlocks are mainly a database design problem, but they are a performance problem as well, because they can prevent 2 or more processes from being able to process data. A deadlock chain can occur and impact hundreds of spids, based on the data access patterns, number of users, object dependencies, etc. Deadlocks could require a database design change, a T-SQL coding change to access the objects in the same order, separating reporting and OLTP applications, including NOLOCK hints in SELECT queries that can accept dirty data, etc.
Trace flags are enabled at different levels:
• Global (instance) level: We can use the -T option in the startup configuration for the SQL Server service, or DBCC TRACEON with the -1 argument, to enable a trace flag at the instance level.
• Session level: We can use the DBCC TRACEON and DBCC TRACEOFF commands to enable or disable a trace flag for the current session.
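For example, the deadlock trace flag 1222 (covered below) can be turned on globally and checked like this:
DBCC TRACEON (1222, -1);  -- -1 applies the flag globally
DBCC TRACESTATUS (1222);  -- verify the flag status
DBCC TRACEOFF (1222, -1); -- switch it off again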
The default port numbers of the principal, mirror and witness servers are 5022, 5023 and 5024.
The query plan is chosen based on the parameters and code being issued to the SQL Server optimizer. Unfortunately, a
slightly different query plan can cause the query to execute much longer and use more resources than another query with
exactly the same code and only parameter differences.
We can use the DBCC TRACESTATUS command
Log hardening is the process of writing the log buffer to the transaction log on disk.
The OPTIMIZE FOR hint can be used to specify what parameter value we want SQL Server to use when creating the execution
plan.
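A sketch of the hint inside a hypothetical stored procedure (dbo.GetOrders and dbo.Orders are illustrative names):
CREATE PROCEDURE dbo.GetOrders @CustomerID INT
AS
BEGIN
    -- Compile the plan as if @CustomerID were 42, regardless of the runtime value
    SELECT * FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    OPTION (OPTIMIZE FOR (@CustomerID = 42));
END;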
Trace Flag: 1204
This trace flag lets SQL Server log detailed deadlock information to the SQL Server Error Log in text format. In SQL Server 2008, this trace flag is only available at the global level (i.e. it applies to the whole SQL Server instance). In my experience, it is worth turning this trace flag on only for debugging purposes.
Trace Flag: 1222
Similar to trace flag 1204, this trace flag lets SQL Server log detailed deadlock information to the SQL Server Error Log, but in XML format.
Trace Flag: 3226
In an environment where database backup is frequently performed, it is a good idea to turn trace flag 3226 on, to suppress an
entry to the SQL Server Error Log for each backup. This is beneficial as the bigger the log, the harder it is to find other
messages. However, you will need to ensure that none of your scripts or systems rely on the backup entry detail on the SQL
Server Error Log.
Yes, using database snapshots.
master, model, resource db, tempdb location.
Inter changing of roles like principal and mirror are called role switching.
ALTER DATABASE AdventureWorks SET WITNESS = 'TCP://SQLWITN.local:5024'
SQL Server Browser service uses the UDP protocol (port 1434).
ALTER DATABASE AdventureWorks SET WITNESS OFF
Once you know all the tools that you have – here is a real life scenario.
How do you approach or troubleshoot performance problems in SQL Server?
There are different ways of troubleshooting performance issues and there is no right or wrong way of troubleshooting performance issues, it is just my way of approaching in identifying the problem and then taking corrective actions.
If the SQL Server Browser service is not running, the following connections do not work:
• If we have just one instance installed on a machine and it is running on the default port 1433, then the status of the SQL Server Browser service does not make any difference to our connection parameters.
• If there is more than one instance running on the same machine, then we either have to start the SQL Server Browser service or provide the port number along with the IP (or server name) and instance name, to access any instance other than the default.
• If the SQL Server Browser service is stopped and the IP along with the port number is not provided, then the connection will be refused.
• If a SQL Server instance is configured using dynamic ports, then the Browser service is required to connect to the correct port number.
• Also our named instances will not be published in the list of SQL Server instances on the network (which could be a
good thing)
The contents of the mirror database are lagging behind the contents of the principal database. The principal server is sending
log records to the mirror server, which is applying the changes to the mirror database to roll it forward. At the start of a
database mirroring session, the database is in the SYNCHRONIZING state. The principal server is serving the database, and the
mirror is trying to catch up.
• The service is authenticated by verifying the credentials provided in the logon account, and the service is started.
• A PID is allocated at the Windows level.
• Authentication mode details are verified, i.e. either MIXED or WINDOWS.
• Information from the startup parameters is captured, i.e. the mdf location of the master database, the SQL Server error log location and the ldf file location.
• Some memory and CPU settings are done at the Windows level (this is an informational message only).
• The master database is started.
• model is the next database to start.
• The port-related information is set.
• tempdb is recreated (this happens each time SQL Server restarts).
• msdb and the other user databases are started, based on dbid.
When the mirror server becomes sufficiently caught up to the principal server, the mirroring state changes to SYNCHRONIZED. The database remains in this state as long as the principal server continues to send changes to the mirror server and the mirror server continues to apply changes to the mirror database.
If transaction safety is set to FULL, automatic failover and manual failover are both supported in the SYNCHRONIZED state; there is no data loss after a failover.
If transaction safety is off, some data loss is always possible, even in the SYNCHRONIZED state.
Distributed Replay is a new feature of Microsoft SQL Server 2012. It helps you assess the impact of future upgrades (SQL Server, hardware, OS) or SQL Server tunings by replaying a trace captured from a production SQL Server environment against a new SQL Server test environment.
This new functionality is similar to SQL Server Profiler, but with more possibilities: e.g. replaying the trace from multiple clients (up to sixteen), using a stress or synchronization mode, and configuring options like think time or connect time.
The mirror copy of the database is not available. The principal database is running without sending any logs to the mirror
server, a condition known as running exposed. This is the state after a failover. A session can also become SUSPENDED as a
result of redo errors or if the administrator pauses the session. SUSPENDED is a persistent state that survives partner
shutdowns and startups.
The following components make up the Distributed Replay environment:
Distributed Replay administration tool: A console application, DReplay.exe, used to communicate with the distributed replay controller. Use the administration tool to control the distributed replay.
Distributed Replay controller: A computer running the Windows service named SQL Server Distributed Replay controller. The Distributed Replay controller orchestrates the actions of the distributed replay clients. There can only be one controller instance in each Distributed Replay environment.
Distributed Replay clients: One or more computers (physical or virtual) running the Windows service named SQL Server Distributed Replay client. The Distributed Replay clients work together to simulate workloads against an instance of SQL Server. There can be one or more clients in each Distributed Replay environment.
Target server: An instance of SQL Server that the Distributed Replay clients can use to replay trace data. We recommend that
the target server be located in a test environment.
This state is found only on the principal server after a failover has begun, but the server has not transitioned into the mirror role.
When the failover is initiated, the principal database goes into the PENDING_FAILOVER state, quickly terminates any user
connections, and takes over the mirror role soon thereafter.
SQL Server Agent is a Microsoft Windows service that executes scheduled administrative tasks, T-SQL statements, SSIS packages, report subscriptions, etc.
The partner has lost communication with the other partner.
• Jobs
• Schedules
• Operators
• Alerts
Msg 1416, Level 16, State 31, Line 3. Database is not configured for database mirroring
We need to restore the full backup from the principal server using the WITH NORECOVERY option, and also one transaction log backup from the principal server using WITH NORECOVERY, and then start configuring mirroring.
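A minimal sketch of that restore sequence (database name and backup paths are illustrative):
RESTORE DATABASE AdventureWorks
FROM DISK = 'D:\Backups\AdventureWorks_Full.bak'
WITH NORECOVERY;
RESTORE LOG AdventureWorks
FROM DISK = 'D:\Backups\AdventureWorks_Log.trn'
WITH NORECOVERY;
-- The mirror database is now in the RESTORING state, ready for ALTER DATABASE ... SET PARTNER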
A job is a specified series of actions that SQL Server Agent performs. We can use jobs to define an administrative task that can be run one or more times and monitored for success or failure. A job can run on one local server or on multiple remote servers.
We can run jobs in several ways:
• According to one or more schedules.
• In response to one or more alerts.
• By executing the sp_start_job stored procedure.
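For example, starting a job from T-SQL (the job name is hypothetical):
EXEC msdb.dbo.sp_start_job @job_name = N'Nightly Backup';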
Each action in a job is a job step. For example, a job step might consist of running a Transact-SQL statement, executing an SSIS package, or issuing a command to an Analysis Services server. Job steps are managed as part of a job.
No, it is not possible.
An operator defines contact information for an individual responsible for the maintenance of one or more instances of SQL Server. In some enterprises, operator responsibilities are assigned to one individual. In enterprises with multiple servers, many individuals can share operator responsibilities. An operator does not contain security information, and does not define a security principal.
SQL Server can notify operators of alerts through one or more of the following:
• E-mail
• Pager (through e-mail)
• net send
A database snapshot is a view of what the source database looked like at the time at which the snapshot was created. This
means that all the objects will be the same as what it was when the snapshot was taken and all of the data will be exactly as it
was then. To use database snapshots to recover from an unwanted DML statement, you need to have a suitable snapshot in
place. Snapshots can only be created by using a T-SQL statement.
A schedule specifies when a job runs. More than one job can run on the same schedule, and more than one schedule can apply to the same job. A schedule can define the following conditions for the time when a job runs:
• Whenever SQL Server Agent starts.
• Whenever CPU utilization of the computer is at a level you have defined as idle.
• One time, at a specific date and time.
• On a recurring schedule
CREATE DATABASE AdventureWorks2012_S1
ON (NAME = [AdventureWorks2012_Data], FILENAME = 'D:\MSSQL\DATA\AdventureWorks2012_Data_SS_1.ss')
AS SNAPSHOT OF AdventureWorks2012
An alert is an automatic response to a specific event. For example, an event can be a job that starts or system resources that reach a specific threshold. You define the conditions under which an alert occurs.
An alert can respond to one of the following conditions:
• SQL Server events
• SQL Server performance conditions
• Microsoft Windows Management Instrumentation (WMI) events on the computer where SQL Server Agent is running
An alert can perform the following actions:
• Notify one or more operators
• Run a job
The high-level tasks of a snapshot involve:
System database: msdb
When a database snapshot is created, a sparse file is added for each database file in the database of which the snapshot was taken. A sparse file is basically an empty file; it does not contain any data until a change is made to the source database.
What question do you ask Developers or Client to understand more about the performance issue?
Below are some examples of the tables in the msdb database which contain information about jobs and their steps:
• sysjobs: Stores the information for each scheduled job to be executed by SQL Server Agent.
• sysjobsteps: Contains the information for each step in a job to be executed by SQL Server Agent.
• sysjobschedules: Contains schedule information for jobs to be executed by SQL Server Agent.
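As a sketch, jobs and their steps can be listed by joining these tables:
SELECT j.name AS job_name, s.step_id, s.step_name, s.subsystem
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
ORDER BY j.name, s.step_id;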
How do you troubleshoot slowness with a specific Stored Procedure or a Query?
SQL Server contains 3 fixed database roles on the MSDB database, which gives administrators fine control over access to SQL Server Agent. The SQL Server Agent node in SSMS is visible only to users in one of these 3 roles (except sysadmins, who can see everything irrespective of role membership). Here is an explanation of the roles, in order from the most restrictive to least restrictive:
SQLAgentUserRole – Users in this role are granted view/edit/delete/execute access to only jobs owned by them. Users in this role cannot view any jobs owned by system administrators, or by users in the other two roles. Grant this role when you want users to only see jobs owned by them.
SQLAgentReaderRole – Users in this role get all the privileges of the SQLAgentUserRole, i.e. they get access to owned jobs. In addition to that, they can also view (but not modify or execute) all jobs on SQL Server Agent, irrespective of ownership. Grant this role when you want users to be able to view, but not execute, all jobs in the system, and to modify/execute only jobs owned by them.
SQLAgentOperatorRole – Users in this role get all the privileges of the SQLAgentReaderRole. In addition to that, they can also
execute, or enable/disable any job in the system. However, users in this role can modify only owned jobs. Grant this role for
super users who can view/execute all jobs on the system.
Isolation level defines the degree to which one transaction must be isolated from resource or data modifications made by
other transactions.
SQL Mail has been removed in SQL Server 2012 version. It has been replaced by Database Mail option.
Database Mail is not active by default. To use Database Mail, We must explicitly enable Database Mail by using either the
Database Mail Configuration Wizard, the sp_configure stored procedure, or by using the Surface Area Configuration facet of
Policy-Based Management.
A query in the current transaction cannot read data modified by another transaction that has not yet committed, thus preventing dirty reads. However, data can still be modified by other transactions between issuing statements within the current transaction, so nonrepeatable reads and phantom reads are still possible. The isolation level uses shared locking or row versioning to prevent dirty reads, depending on whether the READ_COMMITTED_SNAPSHOT database option is enabled. Read Committed is the default isolation level for all SQL Server databases.
A Database Mail account contains the information that Microsoft SQL Server uses to send e-mail messages to an SMTP server. Each account contains information for one e-mail server.
Database Mail supports three methods of authentication to communicate with an SMTP server:
Windows Authentication: Database Mail uses the credentials of the SQL Server Database Engine Windows service account for authentication on the SMTP server.
Basic Authentication: Database Mail uses the username and password specified to authenticate on the SMTP server.
Anonymous Authentication: The SMTP server does not require any authentication. Database Mail will not use any credentials
to authenticate on the SMTP server.
A query in the current transaction can read data modified within another transaction but not yet committed. The database engine does not issue shared locks when Read Uncommitted is specified, making this the least restrictive of the isolation levels. As a result, it's possible that a statement will read rows that have been inserted, updated or deleted, but never committed to the database, a condition known as dirty reads. It's also possible for data to be modified by another transaction between issuing statements within the current transaction, resulting in nonrepeatable reads or phantom reads.
A Database Mail profile is an ordered collection of related Database Mail accounts. Applications that send e-mail using Database Mail specify profiles, instead of using accounts directly. Separating information about the individual e-mail servers from the objects that the application uses improves flexibility and reliability: profiles provide automatic failover, so that if one e-mail server is unresponsive, Database Mail can automatically send mail to another e-mail server. Database administrators can add, remove, or reconfigure accounts without requiring changes to application code or job steps.
Profiles also help database administrators control access to e-mail. Membership in the DatabaseMailUserRole is required to
send Database Mail. Profiles provide additional flexibility for administrators to control who sends mail and which accounts are
used.
No, it's not possible; both principal and mirror should have the same edition.
A query in the current transaction cannot read data modified by another transaction that has not yet committed, thus
preventing dirty reads. In addition, no other transactions can modify data being read by the current transaction until it
completes, eliminating nonrepeatable reads. However, if another transaction inserts new rows that match the search
condition in the current transaction, in between the current transaction accessing the same data twice, phantom rows can
appear in the second read.
Public profiles are available for all members of the DatabaseMailUserRole database role in the msdb database. They allow all members of the DatabaseMailUserRole role to send e-mail using the profile.
Private profiles are defined for security principals in the msdb database. They allow only specified database users, roles, and
members of the sysadmin fixed server role to send e-mail using the profile. By default, a profile is private, and allows access only to members of the sysadmin fixed server role. To use a private profile, sysadmin must grant users permission to use the profile.
Additionally, EXECUTE permission on the sp_send_dbmail stored procedure is only granted to members of the DatabaseMailUserRole. A system administrator must add the user to the DatabaseMailUserRole database role for the user to send e-mail messages.
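A minimal sketch of sending a message through sp_send_dbmail (profile name and recipient are illustrative):
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBA Profile',
    @recipients   = N'dba@example.com',
    @subject      = N'Database Mail test',
    @body         = N'This is a test message sent through Database Mail.';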
A query in the current transaction cannot read data modified by another transaction that has not yet committed. No other
transaction can modify data being read by the current transaction until it completes, and no other transaction can insert new
rows that would match the search condition in the current transaction until it completes. As a result, the Serializable isolation
level prevents dirty reads, nonrepeatable reads, and phantom reads. However, it can have the biggest impact on performance,
compared to the other isolation levels.
SQL Server Agent jobs run with the account assigned to the SQL Server Agent service. If we have to perform some task which requires elevated permissions, we can use a SQL Server Agent Proxy. A proxy is about having additional security: for a job step that needs a different security context, we can specify that it should use the security context of the specified proxy.
A statement can use data only if it will be in a consistent state throughout the transaction. If another transaction modifies data
after the start of the current transaction, the data is not visible to the current transaction. The current transaction works with
a snapshot of the data as it existed at the beginning of that transaction. Snapshot transactions do not request locks when
reading data, nor do they block other transactions from writing data. In addition, other transactions writing data do not block
the current transaction for reading data. As with the Serializable isolation level, the Snapshot level prevents dirty reads,
nonrepeatable reads and phantom reads.
Yes, we can run a job with multiple schedules.
Read committed (Database Engine default level)
Yes, we can list all the schedules of all the jobs from the below option:
Click on SQL Server Agent -> Jobs -> Right-click on Jobs -> Click on Manage Schedules
SELECT Mirroring_Connection_Timeout FROM sys.database_mirroring WHERE database_id = db_id('Database Names')
Transaction isolation levels control:
• Whether locks are taken when data is read, and what type of locks are requested.
• How long the read locks are held.
• Whether a read operation referencing rows modified by another transaction blocks until the exclusive lock on the row is freed, retrieves the committed version of the row, or reads the uncommitted data modification.
Yes, it is possible to run an SSIS package by selecting the Type as SQL Server Integration Services Package in the Step.
By default it is 10 seconds.
ALTER DATABASE AdventureWorks SET PARTNER TIMEOUT 30
Reading uncommitted modifications is called a dirty read. Values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction, so getting incorrect or wrong data is possible.
Yes, it is possible to run it using Operating System Type in the job step.
If the secondary (mirror) is down, the principal shows the status Disconnected.
Rows appearing in or disappearing from the result set of the current transaction, because of changes made by other transactions, are called phantom reads. New rows can be added by other transactions, so you get a different number of rows by firing the same query within the current transaction.
These settings can be used to set up notifications regarding the failure or success of the job.
E-mail
Select this option to send e-mail when the job completes. After selecting this option, choose the operator to notify and the condition that will trigger the notification: When the job succeeds; When the job fails; or When the job completes.
Page
Select this option to send e-mail to an operator’s pager when the job completes. After selecting this option, specify the operator to notify and the condition that will trigger the notification: When the job succeeds; When the job fails; or When the job completes.
Net send
Select this option to use net send to notify an operator when the job completes. After selecting this option, specify the operator to notify and the condition that will trigger the notification: When the job succeeds; When the job fails; or When the job completes.
Write to the Windows Application event log
Select this option to write an entry in the application event log when the job completes. After selecting this option, specify the condition that will cause the entry to be written: When the job succeeds; When the job fails; or When the job completes.
Automatically delete job
Select this option to delete the job when the job completes. After selecting this option, specify the condition that will trigger
deletion of the job: When the job succeeds; When the job fails; or When the job completes.
If the principal is down, the mirror shows the status Disconnected / In Recovery instead of Synchronized / Restoring.
Jobs which run database maintenance tasks, such as:
• Index rebuild/reorganize tasks
• Update statistics tasks
• Database shrink activities
If mirroring is paused from the principal, then both principal and mirror show the status Suspended.
Hints are options or strategies specified for enforcement by SQL Server query processor on SELECT, INSERT, UPDATE, or
DELETE statements. These hints override any execution plan that query optimizer might select for a query.
By default SQL Server doesn’t hold a very long job history for your server. The defaults that SQL Server comes with are as follows:
• Maximum job history log size: 1000 rows
• Maximum job history rows per job: 100
That is the reason we are sometimes not able to get the job history details: the older history has been purged.
There are six methods available for monitoring Database Mirroring:
a) Database Mirroring Monitor: A GUI tool that shows the update status and is used to configure warning thresholds. To open the monitor: right click on the principal database > Tasks > Launch Database Mirroring Monitor.
b) SQL Server Management Studio: A green arrow on the mirror server indicates it is running well. A red arrow indicates problems that need to be investigated.
c) SQL Server Log: It provides information about the mirroring establishment and status. If any errors occur, they are logged to the SQL Server log and the Windows event log.
d) Performance Monitor: It can provide real-time information about database mirroring. We can use performance counters to get the status of database mirroring, such as Bytes received/sec, Bytes sent/sec, Transaction delay etc.
e) Profiler: Many Profiler events provide the status of database mirroring.
f) System stored procedures:
sp_dbmmonitoraddmonitoring
sp_dbmmonitorchangemonitoring
sp_dbmmonitorhelpmonitoring
sp_dbmmonitordropmonitoring
Yes, we can change the retention of the job history. Right-click on SQL Server Agent -> Properties -> History. There are options to set the maximum job history log size.
Data files will be added to the Mirror database if the identical location is present at the mirrored server without any issues.
SQL Server compiles a stored procedure using (sniffing) the parameters sent the first time the procedure is compiled, and puts the plan in the plan cache. After that, every time the procedure is executed again, SQL Server retrieves the execution plan from the cache and uses it (unless there is a reason for recompilation). The potential problem arises when, the first time the stored procedure is executed, the set of parameters generates a plan that is acceptable for that set of parameters but very bad for other, more common sets of parameters.
Database Mail is part of the SQL Server Service which means that the SQL Server Agent cannot automatically know that it’s present and active and which settings to use. Therefore although you may have set up your Operators and are able to select them within the Notifications tab of a scheduled task, you still have to tell the SQL Server Agent which email account and profile it can use.
Check on “Enable mail profile” option and mention the profile name which needs to be used under the Alert System tab of
SQL Server agent properties and Restart the SQL Server agent service.
Database mirroring will not work in this case. We have to break the Mirroring and take a Log backup of the Primary database
and restore it on the Mirror database using “WITH MOVE” and “NORECOVERY” option. Once it’s done then Reestablish the
mirroring.
There are some workarounds to overcome this problem.
We can enable a fail-safe operator feature that will receive all emails in the event that SQL Server Agent cannot communicate with the system tables in the msdb database. This is accomplished by recording the information about the fail-safe operator in the registry. The fail-safe operator will also receive notifications if you have scheduled operators to only receive notifications during certain time periods and a notification occurs outside of that range. For example, if no operators are set to receive notifications on Sunday, the notification will automatically go to the fail-safe operator.
To enable a fail-safe operator, right-click SQL Server Agent and select Properties from the context menu. Then select the Alert System page as shown in the following image.
Select the Enable fail-safe operator checkbox and then choose the appropriate operator and notification method. Click OK to
save the changes.
Yes, it will be copied to the mirror database with the permissions. However, you have to create the login manually with the same SID.
SQL Server uses locks as a mechanism to bring the consistency at the same time build the concurrency for the system. Locks
are inevitable in any database world and those are also fundamental building blocks.
SSMS stands for SQL Server Management Studio. SQL Server Management Studio (SSMS) is an integrated environment for
accessing, configuring, managing, administering, and developing all components of SQL Server. SSMS combines a broad group
of graphical tools with a number of rich script editors to provide access to SQL Server to developers and administrators of all
skill levels. SSMS combines the features of Enterprise Manager, Query Analyzer, and Analysis Manager, included in previous
releases of SQL Server, into a single environment. In addition, SSMS works with all components of SQL Server such as Reporting
Services and Integration Services.
Microsoft SQL Server Profiler is a graphical user interface to SQL Trace for monitoring an instance of the Database Engine or Analysis Services. We can capture and save data about each event to a file or table to analyze later. For example, we can monitor a SQL Server database to see which stored procedures are affecting performance by executing too slowly. SQL Server Profiler is used for activities such as:
• Stepping through problem queries to find the cause of the problem.
• Finding and diagnosing slow-running queries.
• Capturing the series of Transact-SQL statements that lead to a problem. The saved trace can then be used to replicate the problem on a test server where the problem can be diagnosed.
• Monitoring the performance of SQL Server to tune workloads using Database Engine Tuning Advisor.
Database mirroring provides a feature of automatic redirection of the application connection to the mirror database in case of a primary database failure. Database connections are configured as below to enable this functionality.
.NET example:
Data Source=SQLA;Failover Partner=SQLB;Initial Catalog=AdventureWorks2008R2;Integrated Security=True
Shared locks (S): Used when performing read-only operations against database. Resources locked with a shared lock are
available for SELECT, but not for modification.
SQL Server includes another performance tool called the Database Engine Tuning Advisor or DTA. This tool allows you to have SQL Server analyze one statement or a batch of statements that you captured by running a Profiler or server side trace. The tool will then go through each statement to determine where improvements can be made and then presents you with options for improvement. The Database Engine Tuning Advisor is basically a tool that helps you figure out if additional indexes are helpful as well as partitioning. Here is a summary of the options:
• Adding indexes (clustered, non-clustered, and indexed views)
• Adding partitioning
• Adding statistics including multi-column statistics which are not created automatically even when you have the
AUTO_CREATE_STATISTICS database option set to ON
The log buffer is a small contiguous memory structure, only 60 KB maximum, that hosts the in-memory transaction log records per database. When a transaction is committed, the entire buffer is written to the transaction log file before the commit finishes.
Exclusive locks (X): Used for operations that modifies data. INSERT, UPDATE, and DELETE require exclusive locks. No more
than one transaction can have an exclusive lock on a resource. If an exclusive lock exists on a resource, no other transaction
can access that resource.
SQL Server Configuration Manager is a tool to manage the services associated with SQL Server, to configure the network protocols used by SQL Server, and to manage the network connectivity configuration from SQL Server client computers.
SQLServerManager10.msc
Automatic page repair is supported by database mirroring. After certain types of errors corrupt a page, making it unreadable, a
database mirroring partner (principal or mirror) attempts to automatically recover the page. The partner/replica that cannot
read the page requests a fresh copy of the page from its partner. If this request succeeds, the unreadable page is replaced by
the readable copy, and this usually resolves the error, this whole process is called Automatic Page Repair.
Intent locks establish a locking hierarchy. For example, if a transaction has an exclusive lock on a row, SQL Server places an intent lock on the table. When another transaction requests a lock on a row in the table, SQL Server knows to check the rows to see if they have locks. If a table does not have an intent lock, it can issue the requested lock without checking each row for a lock.
Reporting Services Configuration Manager is used to configure a Reporting Services Native Mode installation. If we installed a
report server by using the files-only installation option, we must use this tool to configure the server before we can use it. If we
installed a report server by using the default configuration installation option, we can use this tool to verify or modify the settings
that were specified during setup. Reporting Services Configuration Manager can be used to configure a local or remote report
server instance.
Automatic page repair cannot repair the following control page types:
• File header page (page ID 0).
• Page 9 (the database boot page).
• Allocation pages: Global Allocation Map (GAM) pages, Shared Global Allocation Map (SGAM) pages, and Page Free
Space (PFS) pages.
Used to prevent a table or index that is being used by another session from being dropped or its schema being modified.
When a resource is locked with a schema lock, the object cannot be accessed.
SQL Server provides Resource Governor, a feature that we can use to manage SQL Server workloads and system resource consumption. Resource Governor enables us to specify limits on the amount of CPU and memory that incoming application requests can use.
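A minimal sketch (pool and group names and the limits are illustrative); in practice a classifier function is also registered to route incoming sessions to the workload group:
-- Create a pool capping CPU and memory for reporting workloads
CREATE RESOURCE POOL ReportingPool
WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 25);
-- Create a workload group bound to that pool
CREATE WORKLOAD GROUP ReportingGroup
USING ReportingPool;
-- Apply the new configuration
ALTER RESOURCE GOVERNOR RECONFIGURE;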
Clustering can be best described as a technology that automatically allows one physical server to take over the tasks and responsibilities of another physical server that has failed. The obvious goal behind this, given that all computer hardware and software will eventually fail, is to ensure that users running mission-critical applications will have very little downtime when such a failure occurs.
Bulk update locks (BU): Used to prevent other processes from accessing a table while a bulk load procedure is being
processed. It will, however, allow processing of concurrent bulk load processes, which allows you to run parallel loads. A bulk
load procedure is one performed by using bulk copy program (bcp) or BULK INSERT.
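For example, a bulk load can request a BU-compatible table lock with the TABLOCK hint (the table and file path here are hypothetical):
BULK INSERT dbo.StagingSales
FROM 'C:\data\sales.csv'
WITH (TABLOCK, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');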
Microsoft SQL Server Replication Monitor is a graphical tool that allows us to monitor the overall health of a replication topology. Replication Monitor provides detailed information on the status and performance of publications and subscriptions, allowing us to answer common questions, such as:
• Is my replication system healthy?
• Which subscriptions are slow?
• How far behind is my transactional subscription?
• How long will it take a transaction committed now to reach a Subscriber in transactional replication?
• Why is my merge subscription slow?
• Why is an agent not running?
A cluster node is a server within the cluster, and it has Windows Server and the Cluster service installed.
Key-range locks: Locks taken in the serializable isolation level for locking ranges of data. These kinds of locks can be taken only on keys.
Database administrators use the AlwaysOn Dashboard to obtain an at-a-glance view of the health of an AlwaysOn availability group and its availability replicas and databases in SQL Server 2012. Some of the typical uses for the AlwaysOn Dashboard are:
• Choosing a replica for a manual failover.
• Estimating data loss if you force failover.
• Evaluating data-synchronization performance.
• Evaluating the performance impact of a synchronous-commit secondary replica
The cluster service manages all the activity that is specific to the cluster. One instance of the cluster service runs on each node in the cluster. The cluster service does the following:
• Manages Cluster Objects and Configurations
• Manages the local restart policy
• Coordinates with other instances of the cluster service in the cluster
• Handles event notification
• Facilitates communication among other software components
• Performs failover operations
Lock escalation is the process of converting many fine-grain locks into fewer coarse-grain locks, reducing system overhead while increasing the probability of concurrency contention.
SQL Server Upgrade Advisor helps you prepare for upgrades to SQL Server 2012. Upgrade Advisor analyzes installed
components from earlier versions of SQL Server, and then generates a report that identifies issues to fix either before or after
you upgrade.
A resource is a physical or logical entity, which has the following properties:
• Can be brought online and taken offline
• Can be managed in the failover cluster
• Can be owned by only one node at a time
To manage resources, Cluster service communicates with a resource DLL through Resource Monitor.
Lock escalation is triggered when lock escalation is not disabled on the table by using the ALTER TABLE SET LOCK_ESCALATION option, and when either of the following conditions exists:
• A single Transact-SQL statement acquires at least 5,000 locks on a single nonpartitioned table or index, or on a single partition of a partitioned table.
• The number of locks in an instance of the Database Engine exceeds memory or configuration thresholds.
All resources can have the following states:
• Offline
• Offline_Pending
• Online
• Online_Pending
• Failed
Yes, it is possible to disable lock escalation, either per table with the ALTER TABLE ... SET (LOCK_ESCALATION) option or instance-wide with trace flag 1211 or 1224 (see the modes and example below).
sqlcmd utility in SQL Server is a command-line tool that lets us submit T-SQL statements or batches to local and remote instances
of SQL Server. The utility is extremely useful for repetitive database tasks such as batch processing or unit testing. It also provides
an easy way of simulating load to a database under development.
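For example (server and file names are placeholders):
sqlcmd -S MyServer\SQL2012 -E -Q "SELECT @@VERSION"
sqlcmd -S MyServer\SQL2012 -E -i C:\scripts\deploy.sql -o C:\scripts\deploy.log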
With SQL Server 2008 you can also specify escalation rules at the table level with the ALTER TABLE ... SET (LOCK_ESCALATION) statement. There are 3 available modes:
• TABLE (the default): escalation is always done to the table level, whether or not the table is partitioned.
• AUTO: escalation is done to the partition level if the table is partitioned, and to the table level otherwise.
• DISABLE: lock escalation is disabled for the table in most cases.
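For example, to turn escalation off for a single (hypothetical) table, or to relax it instance-wide with a trace flag:
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);
DBCC TRACEON (1224, -1);  -- ignore lock-count thresholds; escalate only under memory pressure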
osql utility allows us to enter Transact-SQL statements, system procedures, and script files. This utility uses ODBC to
communicate with the server.
Shared locks (S): The least restrictive type of lock. Shared locks are used for read operations that do not change data, such as SELECT statements.
bcp utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. The bcp utility
can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files. Except
when used with the queryout option, the utility requires no knowledge of Transact-SQL. To import data into a table, you must either use a format file created for that table or understand the structure of the table and the types of data that are valid for its
columns.
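For example, to export a table and re-import it in character mode using Windows authentication (server, database, and table names are placeholders):
bcp AdventureWorks.Sales.Customer out C:\data\customer.dat -c -T -S MyServer
bcp AdventureWorks.Sales.CustomerStaging in C:\data\customer.dat -c -T -S MyServer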
Conceptually, a cluster group is a collection of logically grouped cluster resources. It may contain cluster-aware application services, such as a SQL Server group or a file server group.
By importing the SQLPS module into a Windows PowerShell 2.0 environment, we can work with SQL Server from PowerShell. The module loads and registers the SQL Server snap-ins and manageability assemblies.
After importing the sqlps module into Windows PowerShell, we can then:
• Interactively run Windows PowerShell commands.
• Run Windows PowerShell script files.
• Run SQL Server cmdlets.
• Use the SQL Server provider paths to navigate through the hierarchy of SQL Server objects.
• Use the SQL Server manageability object models (such as Microsoft.SqlServer.Management.Smo) to manage SQL
Server objects.
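A short PowerShell sketch, assuming the local default instance:
Import-Module sqlps
Set-Location SQLSERVER:\SQL\localhost\DEFAULT\Databases
Get-ChildItem | Select-Object Name, RecoveryModel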
A public network (also called an external network) provides client systems with access to cluster application services; IP address resources are created on networks that provide clients access to cluster services.
Microsoft SQL Server Database Engine uses a dynamic locking strategy to determine the most cost-effective locks. The Database
Engine automatically determines what locks are most appropriate when the query is executed, based on the characteristics of
the schema and query. For example, to reduce the overhead of locking, the optimizer may choose page-level locks in an index
when performing an index scan.
Data Quality Client application enables you to perform data quality operations using a standalone tool. This application
enables you to create knowledge bases, create and run data quality projects, and perform administrative tasks.
Dynamic locking has the following advantages:
• Simplified database administration, because database administrators do not have to adjust lock escalation thresholds.
• Increased performance, because the Database Engine minimizes overhead by using locks appropriate to the task.
Extended Events has a highly scalable and highly configurable architecture that allows users to collect as much or as little
information as is necessary to troubleshoot or identify a performance problem.
A private network (sometimes called as interconnect or heartbeat connect) is a network that is setup between the nodes of
the cluster and it carries only internal cluster communications.
A latch can be defined as an object that ensures data integrity on other objects in SQL Server memory, particularly pages. Latches are a logical construct that ensures controlled access to a resource and isolation when required for pages in use. In contrast to locks, latches are an internal SQL Server mechanism that isn’t exposed outside the SQLOS.
SQLdiag utility is a general purpose diagnostics collection utility that can be run as a console application or as a service. You can use SQLdiag to collect logs and data files from SQL Server and other types of servers, and use it to monitor your servers over time or troubleshoot specific problems with your servers. SQLdiag is intended to expedite and simplify diagnostic information gathering for Microsoft Customer Support Services.
SQLdiag can collect the following types of diagnostic information:
• Windows performance logs
• Windows event logs
• SQL Server Profiler traces
• SQL Server blocking information
• SQL Server configuration information
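For example, to run a collection using a configuration file and write the output to a specific folder (the file and path are placeholders):
sqldiag /I SQLDiag.xml /O C:\SqlDiagOutput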
Roughly, we can classify latches into two categories: buffer latches and non-buffer latches.
PSSDIAG is a general purpose diagnostic collection utility that Microsoft Product Support Services uses to collect various logs
and data files. PSSDIAG can natively collect Performance Monitor logs, SQL Profiler traces, SQL Server blocking script output,
Windows Event Logs, and SQLDIAG output. The data collection can be customized by enabling or disabling any of these log types,
by changing the sample interval of the blocking script and the Performance Monitor logs, and by modifying the specific events
and counters for SQL Profiler and Performance Monitor to capture.
Heartbeats are messages that Cluster Service regularly sends between the instances of Cluster Service that are on each node
to manage the cluster.
The buffer latches are used to synchronize access to BUF structures and their associated database pages. The typical buffer latching occurs during operations that require serialization on a buffer page (for example, during a page split or during the allocation of a new page). These latches are not held for the duration of a transaction. They are indicated in the sysprocesses table by the PAGELATCH waittypes.
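Buffer-latch contention shows up as PAGELATCH waits, which can be checked in the wait-stats DMV:
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGELATCH%';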
Database Maintenance Plans allow us to automate many database administration tasks in Microsoft SQL Server. We can create maintenance plans using an easy wizard-based process without any knowledge of Transact-SQL.
We may perform the following tasks within a database maintenance plan:
• Shrinking a database
• Backing up a database
• Performing an operator notification
• Updating database statistics
• Verifying the integrity of a database
• Cleaning up leftover maintenance files
• Executing a SQL Server Agent job
• Executing a Transact-SQL statement
• Rebuilding an index
• Reorganizing an index
• Cleaning up database histories
Failover: Failover is the process of moving a group of resources from one node to another in the case of a failure. For example, in a cluster where Microsoft SQL Server is running on node A and node A fails, SQL Server automatically fails over to node B of the cluster.
Failback: Failback is the process of returning a resource or group of resources to the node on which it was running before it
failed over. For example, when node A comes back online, SQL Server can fail back from node B to node A.
The non-buffer latches provide synchronization services to in-memory data structures or provide re-entrancy protection for
concurrency-sensitive code lines. These latches can be used for a variety of things, but they are not used to synchronize access
to buffer pages.
SQL Nexus is a tool that helps you identify the root cause of SQL Server performance issues. It loads and analyzes performance
data collected by SQLDiag and PSSDiag. It can dramatically reduce the amount of time you spend manually analyzing data.
This is a logical drive assigned on the shared disk array specifically for Windows clustering. The cluster service constantly writes information about the state of the cluster to this drive; corruption or failure of this drive can bring down the entire cluster. It also acts as a voter in the failover process in the case of an even number of nodes.
Pessimistic locking locks resources as they are acquired, holding the locks for the duration of the transaction. Pessimistic locking is more likely to cause deadlocks. A deadlock occurs with two transactions when each blocks access to resources needed by the other.
Optimistic locking assumes that conflicts between transactions are unlikely but might occur. Transactions are allowed to
execute without locking resources. The only time resources are checked for a conflict is when data changes are made. If a conflict
occurs, the transaction is rolled back.
The tablediff utility is used to compare the data in two tables for non-convergence, and is particularly useful for troubleshooting non-convergence in a replication topology. This utility can be used from the command prompt or in a batch file to perform the following tasks:
• Perform a row-by-row comparison between a source table in an instance of Microsoft SQL Server acting as a replication Publisher and the destination table at one or more instances of SQL Server acting as replication Subscribers.
• Perform a fast comparison by only comparing row counts and schema.
• Perform column-level comparisons.
• Generate a Transact-SQL script to fix discrepancies at the destination server to bring the source and destination tables into convergence.
• Log results to an output file or into a table in the destination database.
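A typical invocation, with server, database, and file names as placeholders:
tablediff -sourceserver SRV1 -sourcedatabase SalesDb -sourcetable Orders -destinationserver SRV2 -destinationdatabase SalesDb -destinationtable Orders -f C:\temp\fix_Orders.sql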
Microsoft introduced row versioning as an alternative to shared locks in SQL Server 2005. With row versioning, copies of rows are maintained in tempdb, and a transaction reads the version of those rows that was current when the transaction began. Row versioning protects the transaction from the effects of modifications made by other transactions, without the blocking that shared locks would cause.
Even when row versioning is used, SQL Server still takes an exclusive lock on a row before updating it.
Row versioning allows for optimum concurrency (multiple users accessing data at the same time), while providing good
protection.
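Row versioning is enabled per database (the database name below is hypothetical); note that switching READ_COMMITTED_SNAPSHOT on requires no other active connections to the database:
ALTER DATABASE SalesDb SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;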
Replication Distribution Agent (Distrib.exe) is an executable that moves the snapshot (for snapshot replication and
transactional replication) and the transactions held in the distribution database tables (for transactional replication) to the
destination tables at the Subscribers.
Node Majority: This type of quorum is optimal for clusters having an odd number of nodes. In this configuration, only the nodes have votes; the shared storage does not have a vote. A majority of votes is needed to operate the cluster.
SQL Server uses an LRU (Least Recently Used) algorithm to keep track of pages that age in the buffer pool. It increments a counter every time a page is referenced and decrements the counter every time the lazy writer process sweeps the page. Any worker thread will check the memory status of the buffer pool to make sure that there is a healthy number of free pages to honor incoming new requests.
Note: SQL Server will always keep a minimum number of free pages on the free list so that it can serve incoming requests without any significant delay. The amount of free space is calculated based on the buffer pool size and the volume of incoming requests.
Node and Disk Majority: Nodes and a shared disk get votes. This configuration can sustain the loss of half the nodes provided the disk witness remains available; without the disk witness, more than half the nodes must remain available. This is recommended for an even number of nodes in the cluster.
“Halloween protection” in database systems refers to a solution for a problem that can occur in update queries. The problem occurs when an update itself affects the rows selected for update. For example, imagine a company wants to give every employee a 10% pay increase. If the update query is unlucky enough to walk the salary index, the same row may be selected and updated, the update may move the row ahead in the index, and the row may then be selected and updated again and again. This problem is corrected by isolating the rows chosen from the effects of the update itself.
Replication Snapshot Agent is an executable file that prepares snapshot files containing schema and data of published tables
and database objects, stores the files in the snapshot folder, and records synchronization jobs in the distribution database.
Node and File Share Majority: This type of quorum is optimal for clusters having an even number of nodes when a shared witness disk is not an option. Other characteristics include the following:
• A file share witness, rather than a shared disk, provides the tie-breaking vote.
• The file share witness does not store a copy of the cluster database.
• A majority of votes (nodes plus the file share witness) is needed to operate the cluster.
dtexec command prompt utility is used to configure and execute SQL Server Integration Services packages.
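For example, to run a package stored in the file system (the path is a placeholder):
dtexec /F "C:\packages\LoadSales.dtsx"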
No Majority (Disk Only): The disk witness must be available to have quorum, but the number of available nodes doesn’t matter. If you have a four-node cluster and only one node is available, but the disk witness is available, you have quorum. If the disk witness isn’t available, then even if all four nodes are available you can’t have quorum.
The following query returns the number of worker threads currently in use across all schedulers:
SELECT SUM(current_workers_count) AS [Current worker threads] FROM sys.dm_os_schedulers;
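To compare against the configured maximum, sys.dm_os_sys_info exposes the worker thread ceiling:
SELECT max_workers_count FROM sys.dm_os_sys_info;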
The dtutil command prompt utility is used to manage SQL Server Integration Services packages. The utility can copy, move,
delete, or verify the existence of a package.
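For example, to verify that a package file exists and then copy it (the paths are placeholders):
dtutil /FILE C:\packages\LoadSales.dtsx /EXISTS
dtutil /FILE C:\packages\LoadSales.dtsx /COPY FILE;C:\backup\LoadSales.dtsx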
Cluster nodes communicate with each other over the network (port 3343). When nodes are unable to communicate with each other, each assumes the resources of the other (unreachable) nodes have to be brought online. Because the same resource will be brought online on multiple nodes at the same time, data corruption may occur. This results in a situation called “split brain.”
Statistics refers to the statistical information about the distribution of values in one or more columns of a table or an index. The SQL Server Query Optimizer uses this statistical information to estimate the cardinality, or number of rows, in the query result to be returned, which enables it to create a high-quality query execution plan. For example, based on this statistical information the SQL Server Query Optimizer might decide whether to use an index seek operator or a more resource-intensive index scan operator in order to provide optimal query performance.
Data Profile Viewer is used to view and analyze data profiles in the data profiling process. We can view these profiles after we have run the Data Profiling task inside an Integration Services package and computed the data profiles.
To prevent split brain, we need to bring the cluster resources online on a single node (rather than multiple nodes). Each online node casts a vote, and the resources come online on the group that has the majority of votes. In the case of an even number of nodes, the quorum also acts as a voter to eliminate the split-brain situation.
Index statistics: Created automatically when an index (both clustered and non-clustered) is created. These will have the same name as the index and will exist as long as the index exists.
Column statistics: Created manually by the DBA using the ‘CREATE STATISTICS’ command, or automatically if the “Auto Create
Statistics” option is set to “True”. Column statistics can be created, modified and dropped at will.
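For example, to create multi-column statistics manually on a hypothetical table:
CREATE STATISTICS st_Orders_City_Region ON dbo.Orders (City, Region);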
Integration Services Project Conversion Wizard converts a complete project to the project deployment model.
Statistics contain two different types of information about the data: density and distribution. Density is simply the inverse of the
count of distinct values for the column or columns. The distribution is a representation of the data contained in the first column
of the statistic. This information is stored in a histogram; the histogram contains up to 200 steps with a lower and upper limit
and contains the count of values that fall between both limits.
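The density and histogram of a statistics object can be inspected with DBCC SHOW_STATISTICS (the table and statistics names are placeholders):
DBCC SHOW_STATISTICS ('dbo.Orders', st_Orders_City_Region);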
Policy Management tool is used to create policies to manage entities on the server, such as the instance of SQL Server,
databases, or other SQL Server objects. Policy-based management in SQL Server assists the database administrators in defining
and enforcing policies that tie to database objects and instances. These policies allow the administrator to configure and manage
SQL server across the enterprise.
This database property allows SQL Server to automatically create statistics for a single non-indexed column if they are missing when it is used in a WHERE or JOIN condition. This ensures the optimizer will have the necessary information to choose a good plan. The names of the statistics automatically created by SQL Server start with _WA_.
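Auto-created column statistics can be listed from sys.stats:
SELECT OBJECT_NAME(s.object_id) AS table_name, s.name
FROM sys.stats AS s
WHERE s.auto_created = 1;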