If you want to work in the MS SQL Server field, preparing for the technical interview is no bed of roses. The following list of MS SQL Server interview questions and answers can assist you whether you are a candidate seeking a job or a recruiter looking for the best MS SQL Server professional. You can also use it to model other questions based on this pattern, or to answer questions that are similar to these.
Master Data Services Configuration Manager is the tool that we use to create or configure a Master Data Services database.
After we create a database, we can use Master Data Services Configuration Manager to create a web application and to enable
integration with Data Quality Services. Finally, we can use Master Data Services Configuration Manager to associate an MDS
database with an MDS web application.
This database property allows SQL Server to automatically update the statistics when they are deemed outdated. The update
occurs before executing a query if certain conditions are met, or after the query is executed if Auto Update Statistics
Asynchronously is used instead.
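As a sketch, the current setting can be inspected in sys.databases and changed with ALTER DATABASE (the database name AdventureWorks is just a placeholder):

```sql
-- Check whether auto-update of statistics is enabled (1 = on)
SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'AdventureWorks';  -- placeholder database name

-- Enable synchronous auto-update of statistics
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;

-- Optionally update statistics asynchronously, after the triggering query runs
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```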
Installation-related logs are stored under the shared feature directory folder which was selected at the time of the first SQL Server instance installation.
e.g. If Shared Features were selected to be placed on the “C:\Program Files\Microsoft SQL Server” then logs will be created under
%programfiles%\Microsoft SQL Server\110\Setup Bootstrap\Log\<YYYYMMDD_HHMM>\
C:\Program Files\Microsoft SQL Server\100\Setup BootStrap\Release
Before SQL Server 2008, rebuilding the master database required the complete SQL Server setup media. Starting with SQL Server 2008, the full SQL Server setup is no longer needed; the system databases can be rebuilt with the setup.exe located in the above location.
The three conditions, any one of which will trigger an update, are:
• The table had zero rows and one or more rows have since been added.
• The table had 500 rows or fewer when the statistics were last gathered, and more than 500 rows have since been modified.
• The table had more than 500 rows when the statistics were last gathered, and at least 500 rows plus 20% of the table's row count have since been modified.
SQL Server Setup generates a configuration file named ConfigurationFile.ini, based upon the system default and run-time
inputs. The ConfigurationFile.ini file is a text file that contains the set of parameters in name/value pairs along with
descriptive comments. Many of the parameter names correspond to the screens and options which you see while installing
SQL Server through the wizard. We can then use the configuration file to install SQL Server with the same configuration
instead of going through each of the installation screens.
When we install Microsoft SQL Server, five system databases are automatically created for each SQL Server instance. These system databases allow the database engine and administrative applications to properly manage the system:
• master
• model
• msdb
• tempdb
• Resource (SQL Server 2005 and higher only)
Starting with SQL Server 2012, SQL Server Query Optimizer creates and maintains temporary statistics in tempdb database for
the read-only database or read-only snapshot database or readable secondaries in the AlwaysOn cluster in order to ensure your
queries perform better. These temporary statistics are created and maintained by SQL Server Query Optimizer only; though you
can delete them when they are not needed. These statistics are suffixed with “_readonly_database_statistic” to differentiate them from regular statistics.
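A quick way to list these temporary statistics, as a sketch run inside the read-only database:

```sql
-- Temporary statistics created by the optimizer carry the
-- _readonly_database_statistic suffix and is_temporary = 1 (SQL Server 2012+)
SELECT OBJECT_NAME(object_id) AS table_name, name, is_temporary
FROM sys.stats
WHERE is_temporary = 1
   OR name LIKE N'%readonly_database_statistic%';
```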
We can find the configuration file in the C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log folder. There will be a
subfolder based on a timestamp of when the SQL Server 2012 installation was done.
The Master database is the heart and soul of SQL Server. It records all the system-level information. Every instance of SQL Server has an independent Master database, as it captures instance-level configuration information. The information captured in the Master database includes SQL Server instance-level configurations, linked server configurations, SQL Server Logins, Service Broker Endpoints, system-level stored procedures, system-level functions, etc. Information about the system and user databases themselves, such as each database's name and file locations, is also captured in the Master database.
If master database is corrupted or if it is not available then the SQL Server Service will not start. In SQL Server 2005 and later
versions the system objects are stored in Resource Database rather than in Master Database. The Master database is created
using Simple Recovery Model.
This is the SQL Server Instance name that all client applications will use to connect to the SQL Server.
The SQL Server optimizer uses statistics to create execution plans for queries. It is therefore very important to update the statistics from time to time to make sure that SQL Server uses the best optimized plan to execute queries.
Based on the components selected during installation, we will find the corresponding services for each component in Windows Services, e.g. SQL Server, SQL Server Agent, SQL Server Analysis Services, SQL Server Integration Services, etc. Each service runs under a user account; that user is called the Service Account of the service. We mainly categorize service accounts as below:
Local User Account: This user account is created in the server where SQL Server is installed; this account does not have access to network resources.
Local Service Account: This is a built-in Windows account that is available for configuring services in Windows. This account has the same permissions as accounts in the Users group, so it has limited access to the resources on the server.
Local System Account: This is a built-in windows account that is available for configuring services in windows. This is a highly privileged account that has access to all resources in the server with administrator rights.
Network Service Account: This is a built-in windows account that is available for configuring services in windows. This has permission to access resources in the network under the computer account.
Domain Account: This account is part of our domain and has access to the network resources for which it is granted permission. It is always advisable to run SQL Server and its related services under a domain account with the minimum privileges needed to run SQL Server and its related services.
The Model database is used as a template when creating databases in SQL Server: SQL Server takes a copy of the Model database whenever a user creates a new database. This also means that if a user creates any tables, stored procedures, user-defined data types, or user-defined functions within the Model database, those objects will be available in every newly created database on that particular instance of SQL Server.
If the Model database is damaged or corrupted then SQL Server Service will not start up as it will not be able to create the
tempdb database.
We can use stats_date() function to check the statistics updated date.
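For example (the table name is a placeholder), STATS_DATE can be combined with sys.stats to list when each statistic on a table was last updated:

```sql
-- Last update date for every statistic on a table
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'Sales.SalesOrderDetail');
```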
No, it is not required. It’s not mandatory to grant Administrator permissions to the service account.
SQL Server Agent uses the MSDB database to store information related to the configuration of SQL Server Agent jobs, job schedules, alerts, operators, etc. MSDB also stores information related to the configuration of Service Broker, Log Shipping, database backup and restore history, Maintenance Plan configuration, Database Mail configuration, Policy Based Management information in SQL Server 2008, etc.
If the MSDB database is corrupted or damaged then scheduling information used by SQL Server Agent will be lost. This will
result in the failure of all scheduled activities.
An Active-Passive cluster is a failover cluster configured so that only one cluster node is active at any given time. The other node, called the Passive node, is always online but idle, waiting for a failure of the Active node, upon which the Passive node takes over the SQL Server services and becomes the Active node, with the previous Active node now being a Passive node.
An Active-Active cluster is a failover cluster configured so that both cluster nodes are active at any given point in time. That is, an instance of SQL Server is always running on each node; when one of the nodes fails, both instances run on the single surviving node until the failed node is brought back up (after fixing the issue that caused the node failure). The instance is then failed back to its designated node.
To update statistics for a particular table, we can use the command below.
UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN
The user through which we are installing SQL Server must have administrator permissions on the Windows server.
If you are running the setup as a local administrator, you require the following user rights for the setup to run successfully:
Local Group Policy Object display name      User right
Backup files and directories                SeBackupPrivilege
Debug Programs                              SeDebugPrivilege
Manage auditing and security log            SeSecurityPrivilege
The tempdb database is considered a global resource that all connections and all users can access. The tempdb database holds user-created temporary database objects, such as temporary tables and temporary stored procedures. This database is also used heavily by the database engine to serialize intermediate results and to hold other temporary data and objects that are generated during processing. Tempdb is always re-created when the SQL Server service starts. Because tempdb is so heavily used by the system, many system-wide performance optimizations are necessarily focused on making tempdb as efficient as possible.
In SQL Server 2005 we have the option of installing SQL Server on the remaining nodes from the primary node, but in SQL Server 2008 we need to run the installation separately (logging in to both nodes) to set up a SQL Server cluster.
(SOS Scheduler)– The object that manages thread scheduling in SQL Server and allows threads to be exposed to the CPU
(described in sys.dm_os_schedulers). It does not control things but lets the workers work with each other and relies on their
cooperation (co-operative scheduling mode). Each scheduler /master (one per logical CPU) accepts new tasks and hands them
off to workers. SOS Scheduler allows one worker at a time to be exposed to the CPU.
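The schedulers can be observed through the DMV mentioned above; a minimal sketch:

```sql
-- One row per scheduler; VISIBLE ONLINE schedulers map to logical CPUs
SELECT scheduler_id, status, current_tasks_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';
```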
This directory contains the common files used by all instances on a single computer e.g. SSMS, sqlcmd, BCP, DTExec, etc. These
are installed in the folder
The Resource database is a read only, hidden system database that contains all the SQL Server system objects such as sys.objects which are physically available only in the Resource database, even though they logically appear in the SYS schema of every database. The Resource Database does not contain any user data or any user metadata. By design, the Resource database is not visible under SQL Server Management Studio’s Object Explorer | Databases | System Databases Node.
The DBA shouldn’t rename or move the Resource Database file. If the files are renamed or moved from their respective
locations then SQL Server will not start. The other important thing to be considered is not to put the Resource Database files in
a compressed or encrypted NTFS file system folders as it will hinder the performance and will also possibly prevent upgrades.
Yes, we can change the Quorum setting after the Windows Cluster installation.
Scheduler
A scheduler has three important components: the Processor, the Waiter List, and the Runnable Queue.
A Processor is the actual processor, which processes one thread at a time.
The Waiter List is the list of threads waiting for resources.
The Runnable Queue is a queue of threads that have all the resources they need but are waiting for their turn on the processor.
The scheduler puts a thread into the Runnable Queue, moves it to the Processor, and then migrates it to the Waiter List. Each thread keeps cycling through these three components until its work completes.
An instance of the Database Engine is a copy of the sqlservr.exe executable that runs as an operating system service. Each
instance manages its own system databases and one or more user databases. An instance is a complete copy of an SQL Server
installation.
The physical file names of the Resource database are mssqlsystemresource.mdf and mssqlsystemresource.ldf. These files are
located in
No, it's not mandatory to configure the MSDTC service to install SQL Server on a Windows 2008 cluster. The installation will give you a warning but will not stop.
Preemptive Scheduling is when a computer process is interrupted and the CPU’s power is given over to another process with a higher priority. This type of scheduling occurs when a process switches from running state to a ready state or from a waiting state to a ready state.
Non-Preemptive Scheduling
Non-Preemptive Scheduling allows the process to run through to completion before moving onto the next task.
There are two types of Instances.
Each computer can run a maximum of 50 instances of the Database Engine. One instance can be the default instance.
The default instance has no name. If a connection request specifies only the name of the computer, the connection is made to the default instance.
A named instance is one where you specify an instance name when installing the instance.
A connection request must specify
both the computer name and instance name in order to connect to the instance.
There are three different types of objects stored in tempdb.
• Internal Objects:
1. Intermediate runs for sort.
2. Intermediate results for hash join and hash aggregates.
3. XML variables or other large object (LOB) data type variables. (text, image, ntext, varchar(max), varbinary(max))
4. Queries that need a spool to store intermediate results.
5. Keyset cursors to store the keys.
6. Static cursors to store a query result.
7. Service Broker to store messages in transit.
8. INSTEAD OF triggers to store data for internal processing.
9. DBCC CHECK internally uses a query that may need to spool intermediate results.
10. Query notification and event notification use Service Broker.
• Version Store:
1. Snapshot Isolation / Read Committed Snapshot Isolation
2. Triggers (AFTER triggers). INSTEAD OF triggers do not generate versions.
3. MARS (Multiple Active Result Sets)
4. Index Rebuilds
• User Objects:
1. User defined tables and indexes
2. Local and global temporary tables, bulk insert and BCP intermediate results
3. Index rebuilds with “SORT IN TEMPDB” option.
SQLIO is a generic tool for testing your disk subsystem. It allows you to specify whether to test random/sequential, data block sizes, queue depth, threading, etc. Despite its name, SQLIO is not part of SQL Server, yet it can be an invaluable tool for determining whether the disk drives slated to support SQL Server can meet the system’s I/O needs.
The tool lets you push the limits of your disk subsystem in order to determine the hardware’s I/O capacity and to identify performance-related issues before deploying SQL Server.
You can use the tool to test a variety of I/O types and sizes against various disk subsystems, including direct attached storage,
network attached storage, and storage area networks.
Service Pack (abbreviated SP): a service pack is a collection of updates and fixes, called patches, for an operating system or a software program. Many of these patches are often released individually before the larger service pack, but the service pack allows for an easy, single installation.
Patch – Publicly released update to fix a known bug/issue
Hotfix – update to fix a very specific issue, not always publicly released
The MSDB database is the database with the backup and restores system tables. Here are the backup and restore system tables and their purpose:
backupfile – contains one row for each data file or log file backed up
backupmediafamily – contains one row for each media family
backupmediaset – contains one row for each backup media set
backupset – contains one row for each backup set
restorefile – contains one row for each restored file
restorefilegroup – contains one row for each restored filegroup
restorehistory – contains one row for each restore operation
SQLIOSim is a tool that tries to emulate the I/O patterns that SQL Server would expose to your system. The SQLIOSim utility simulates SQL Server read, write, sort, backup, checkpoint, and read-ahead patterns for both user and system activity. The tool creates sample data and log files in order to perform SQL Server reliability and integrity tests on the disk subsystem.
Because the utility runs independently of the SQL Server engine, you can use it before installing SQL Server in order to ensure that new hardware can handle your expected loads.
SQL Server 2008 introduced a concept called “Slipstream Installation”. This is a way to deploy a SQL Server instance with all the needed service packs as part of the installation. Everything is installed in one go, so there is no need to deploy any other service packs after the installation.
ReportServer and ReportServerTempdb
New validation feature. With this feature, you can check that your system, storage, and network configuration is suitable for a cluster.
Support for GUID partition table (GPT) disks in cluster storage. GPT disks can have partitions larger than two terabytes and have built-in redundancy in the way partition information is stored, unlike master boot record (MBR) disks.
There are some key enterprise edition only performance benefits across RAM, Parallelism, Query Plans and DISK I/O that will lead to better performance on high end systems.
1. Lock Pages in Memory
2. Advanced Scanning (aka Merry-go-round scan)
3. Larger Read Ahead Buffering on Storage Devices
4. Large Page Extensions
5. Parallel index Operations
Step 1. Click Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools >SQL Server Configuration Manager
Step 2. Go to SQL Server Configuration Manager > SQL Server Network Configuration >Protocols for <Instance Name>
Yes, it is very important to take backups of the system databases, except tempdb.
The cluster continues to work, but failover will not happen if there is any other failure on the active node.
In SQL Server Enterprise Edition, the advanced scan feature allows multiple tasks to share full table scans. If the execution plan
of a Transact-SQL statement requires a scan of the data pages in a table and the Database Engine detects that the table is already
being scanned for another execution plan, the Database Engine joins the second scan to the first, at the current location of the
second scan. The Database Engine reads each page one time and passes the rows from each page to both execution plans. This
continues until the end of the table is reached.
Below are the methods we can use to get the port information.
Method 1: SQL Server Configuration Manager
Method 2: Windows Event Viewer
Method 3: SQL Server Error Logs
Method 4: sys.dm_exec_connections DMV
Method 5: Reading registry using xp_instance_regread
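Method 4 above can be sketched as follows (requires VIEW SERVER STATE permission):

```sql
-- TCP port of the current connection; NULL for shared-memory connections
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```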
No, it’s not possible to take tempdb backup and it’s not required.
In Windows Server 2003, the Quorum disk resource is required for the cluster to function. For example, if the Quorum disk suddenly became unavailable to the cluster, both nodes would immediately fail and would not be able to restart the cluster service.
A database index is a data structure that improves the speed of data retrieval operations on a database table at the cost of
additional writes and storage space to maintain the index data structure. Indexes are used to quickly locate data without
having to search every row in a database table every time a database table is accessed. Indexes can be created using one or
more columns of a database table.
FILESTREAM was introduced in SQL Server 2008 for the storage and management of unstructured data. The FILESTREAM feature
allows storing BLOB data (example: word documents, image files, music, and videos, etc) in the NT file system and ensures
transactional consistency between the unstructured data stored in the NT file system and the structured data stored in the
table.
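As a sketch, FILESTREAM access can be enabled at the instance level with sp_configure before any FILESTREAM-enabled database or column is created:

```sql
-- Enable FILESTREAM for Transact-SQL and Win32 streaming access
EXEC sp_configure N'filestream access level', 2;
RECONFIGURE;
```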
No, it’s not possible to rename any system database.
Groups that contain an IP address resource and a network name resource (along with other resources) are published to clients
on the network under a unique server name. Because these groups appear as individual servers to clients, they are called virtual
servers. Users access applications or services on a virtual server the same way they access applications or services on a physical
server. They do not need to know that they are connecting to a cluster and have no knowledge of which node they are connected
to.
SQL Server error logs are stored in the below location.
Instance Root Directory\MSSQL\Log
The default owner of the system databases is sa. We cannot change the default owner of the master, model, tempdb, and distribution databases.
In the Cluster Administrator, right-click on the SQL Server group and choose Take Offline from the popup menu.
By default, there are seven SQL Server error logs; Errorlog and Errorlog.1 through Errorlog.6. The name of the current, most
recent log is Errorlog with no extension. The log is re-created every time that you restart SQL Server. When the Errorlog file is
re-created, the previous log is renamed to Errorlog.1, and the next previous log (Errorlog.1) is renamed to Errorlog.2, and so on.
Errorlog.6 is deleted.
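The current and archived logs can also be read from T-SQL; a sketch using the undocumented but widely used sp_readerrorlog procedure:

```sql
-- First parameter: 0 = current log, 1 = Errorlog.1, and so on
-- Second parameter: 1 = SQL Server error log, 2 = SQL Server Agent log
EXEC sp_readerrorlog 0, 1;
```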
• Preallocate space for all tempdb files by setting the file size to a value large enough to accommodate the typical workload in the environment. This prevents tempdb from expanding too frequently, which can affect performance. The tempdb database should be set to autogrow, but this should be used to increase disk space for unplanned exceptions.
• Create as many files as needed to maximize disk bandwidth. Using multiple files reduces tempdb storage contention and yields significantly better scalability. However, do not create too many files because this can reduce performance and increase management overhead. As a general guideline, create one data file for each CPU on the server (accounting for any affinity mask settings) and then adjust the number of files up or down as necessary. Note that a dual-core CPU is considered to be two CPUs.
• Make each data file the same size; this allows for optimal proportional-fill performance.
• Put the tempdb database on a fast I/O subsystem. Use disk striping if there are many directly attached disks.
• Put the tempdb database on disks that differ from those that are used by user databases.
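The recommendations above can be sketched with ALTER DATABASE; the file sizes, growth increments, and path below are placeholders to adjust for the environment:

```sql
-- Preallocate the primary tempdb data file and cap autogrowth increments
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

-- Add an equally sized data file (one per CPU as a starting point)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = N'T:\TempDB\tempdev2.ndf',  -- placeholder path
          SIZE = 4096MB, FILEGROWTH = 512MB);
```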
After adding the shared disk in the storage, we can add disk to the respective SQL Server Group.
A non-clustered index can be defined on a table or view with a clustered index or on a heap. Each index row in the non-clustered index contains the non-clustered key value and a row locator. This locator points to the data row in the clustered
index or heap having the key value. The rows in the index are stored in the order of the index key values, but the data rows are
not guaranteed to be in any particular order unless a clustered index is created on the table.
Yes, it is possible to change the number of error log files retained.
In SQL Server Management Studio, right-click SQL Server Logs under the Management node, choose Configure, and set “Limit the number of error log files before they are recycled”.
SQL Server backup and restore operations occur within the context of the recovery model of the database. Recovery models
are designed to control transaction log maintenance. A recovery model is a database property that controls how transactions
are logged, whether the transaction log requires (and allows) backing up, and what kinds of restore operations are available.
Maximum 16.
A unique index ensures that the index key contains no duplicate values and therefore every row in the table or view is in some
way unique. Uniqueness can be a property of both clustered and non-clustered indexes.
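For example (table and column names are placeholders):

```sql
-- Enforce uniqueness of e-mail addresses with a unique non-clustered index
CREATE UNIQUE NONCLUSTERED INDEX IX_Customers_Email
ON dbo.Customers (Email);
```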
Securables are the resources to which the SQL Server Database Engine authorization system regulates access. For example, a table is a securable. Some securables can be contained within others, creating nested hierarchies called “scopes” that can themselves be secured. The securable scopes are server, database, and schema.
Three recovery models exist: simple, full, and bulk-logged. Typically, a database uses the full recovery model or simple
recovery model. A database can be switched to another recovery model at any time.
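Checking and switching the recovery model can be sketched as follows (the database name is a placeholder):

```sql
-- Current recovery model of every database
SELECT name, recovery_model_desc FROM sys.databases;

-- Switch a database to the full recovery model
ALTER DATABASE AdventureWorks SET RECOVERY FULL;
```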
A failover cluster is a group of independent computers that work together to increase the availability of applications and services.
The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another
node begins to provide service (a process known as failover). Users experience a minimum of disruptions in service.
A column-store index stores data in a column-wise (columnar) format, unlike the traditional B-tree structures used for
clustered and non-clustered row store indexes, which store data row-wise (in rows). A column-store index organizes the data
in individual columns that are joined together to form the index. This structure can offer significant performance gains for
queries that summarize large quantities of data, the sort typically used for business intelligence (BI) and data warehousing.
Securable scope: Server –The server securable scope contains the following securable:
Securable scope: Database –The database securable scope contains the following securable:
Securable scope: Schema –The schema securable scope contains the following securable:
We can set the recovery model of the model database to simple to make sure that newly created databases automatically have their recovery model configured as simple.
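A minimal sketch of that configuration:

```sql
-- New databases inherit model's settings, including the recovery model
ALTER DATABASE model SET RECOVERY SIMPLE;
```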
Services and applications are managed as single units for configuration and recovery purposes. If a resource depends on another
resource, both resources must be a member of the same service or application. For example, in a file share resource, the service
or application containing the file share must also contain the disk resource and network resources (such as the IP address and
NetBIOS name) to which clients connect to access the share. All resources within a service or application must be online on the
same node in the cluster.
A non-clustered index that is extended to include non-key columns in addition to the key columns.
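For example (object names are placeholders), the INCLUDE clause adds non-key columns at the leaf level so the index alone can cover the query:

```sql
-- Covers queries that filter on PostalCode and return the included columns
CREATE NONCLUSTERED INDEX IX_Address_PostalCode
ON Person.Address (PostalCode)
INCLUDE (AddressLine1, City);
```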
SQL Server 2005 introduced the concept of database schemas and the separation between database objects and ownership by users. An object owned by a database user is no longer tied to that user. The object now belongs to a schema – a container that can hold many database objects. A schema is a collection of database objects that are owned by a single principal and form a single namespace.
A Database backup is a copy of SQL Server data that can be used to restore and recover the data in case of any failure. A backup
of SQL Server data is created at the level of a database or one or more of its files or filegroups. There is another option to take
Transaction Log backups when the database recovery model of a database is Full.
The service account needs Create Computer Objects permission in Active Directory.
An optimized non-clustered index, especially suited to cover queries that select from a well-defined subset of data. It uses a
filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance, reduce
index maintenance costs, and reduce index storage costs compared with full-table indexes.
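A sketch of a filtered index over a well-defined subset of rows (object names are placeholders):

```sql
-- Indexes only rows where EndDate is NULL, i.e. currently active rows
CREATE NONCLUSTERED INDEX FIX_WorkOrder_Active
ON Production.WorkOrder (DueDate)
WHERE EndDate IS NULL;
```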
Bulk Admin: Members of this role can perform Bulk Insert operations on all the databases.
DB Creator: Members of this role can Create/Alter/Drop/Restore a database.
Disk Admin: Members can manage disk files for the server and all databases. They can handle backup devices.
Process Admin: Members of this role can manage and terminate the processes on the SQL Server.
Server Admin: Members of this role can change Server-wide configurations and shut down SQL Server instances.
Setup Admin: Members of this role can Add/Remove Linked Servers.
Security Admin: Members of this role can create/manage Logins, including changing and resetting passwords as needed, and managing GRANT, REVOKE and DENY permissions at the server and database levels.
SysAdmin: Members of this role have Full Control on the instance and can perform any task.
Public: Public is another role just like Fixed Server Roles, that is by default granted to every login (Windows/SQL)
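Membership in these fixed server roles is managed as follows; the login name is a placeholder, and ALTER SERVER ROLE is SQL Server 2012+ syntax (earlier versions use sp_addsrvrolemember):

```sql
-- Grant a login the ability to create databases
ALTER SERVER ROLE dbcreator ADD MEMBER [CONTOSO\DbaUser];  -- placeholder login
```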
The following backup types are available in SQL Server 2012.
• Full Backup
• Differential Backup
• Transaction Log Backup
• Copy-Only Backup
• File or Filegroup Backup
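The backup types above can be sketched as follows (the database name and disk paths are placeholders):

```sql
-- Full backup
BACKUP DATABASE AdventureWorks TO DISK = N'D:\Backup\AW_full.bak';

-- Differential backup (changes since the last full backup)
BACKUP DATABASE AdventureWorks TO DISK = N'D:\Backup\AW_diff.bak'
WITH DIFFERENTIAL;

-- Transaction log backup (full or bulk-logged recovery model only)
BACKUP LOG AdventureWorks TO DISK = N'D:\Backup\AW_log.trn';

-- Copy-only backup: does not disturb the backup chain
BACKUP DATABASE AdventureWorks TO DISK = N'D:\Backup\AW_copy.bak'
WITH COPY_ONLY;
```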
SQL Server services should always be set to manual start mode in a cluster because they are managed by the Cluster service, which brings them online on their respective owner node based on failover.
A spatial index provides the ability to perform certain operations more efficiently on spatial objects (spatial data) in a column
of the geometry data type. The spatial index reduces the number of objects on which relatively costly spatial operations need
to be applied.
Dynamic management views and functions return server state information that can be used to monitor the health of a server
instance, diagnose problems, and tune performance.
There are two types of dynamic management views and functions:
Server-scoped dynamic management views and functions. These require VIEW SERVER STATE permission on the server.
Database-scoped dynamic management views and functions. These require VIEW DATABASE STATE permission on the
database.
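For example, a server-scoped and a database-scoped DMV side by side:

```sql
-- Server-scoped: needs VIEW SERVER STATE
SELECT session_id, status, login_name FROM sys.dm_exec_sessions;

-- Database-scoped: needs VIEW DATABASE STATE
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL);
```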
The WITH PASSWORD option for backups is no longer available from SQL Server 2012 onwards.
Distributed lock management (DLM): Distributed lock management (DLM) enables two servers to access the same physical disk
at the same time without corrupting the data. If a device is updating a particular file or piece of data, the device gets locked so
that another controller can’t seize ownership and overwrite the data. NT does not currently support DLM, so disks are dedicated
to one node or the other.
A shredded, and persisted, representation of the XML binary large objects (BLOBs) in the xml data type column.
The VIEW DEFINITION permission lets a user see the metadata of the securable on which the permission is granted. However, VIEW DEFINITION permission does not confer access to the securable itself. For example, a user that is granted only VIEW DEFINITION permission on a table can see metadata related to the table in the sys.objects catalog view. However, without additional permissions such as SELECT or CONTROL, the user cannot read data from the table.
The VIEW DEFINITION permission can be granted on the following levels:
Transaction Log backups are possible in the Full and Bulk Logged recovery models.
LooksAlive: Verifies that the SQL Server service runs on the online node every 5 seconds by default.
A special type of token-based functional index that is built and maintained by the Microsoft Full-Text Engine for SQL Server. It
provides efficient support for sophisticated word searches in character string data.
The guest user permits access to a database for any login that is not mapped to a specific database user. The guest user cannot be dropped, but it can be disabled by revoking its CONNECT permission. Disabling the guest user in every user database is recommended as a best practice for securing SQL Server, but the recommendation does not apply to the master, msdb, and tempdb system databases; if the guest user is disabled in the msdb system database, it may cause some issues. The distribution database is also a system database, and more information about the guest user in the distribution database can be found below.
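Disabling the guest user in a user database can be sketched as:

```sql
USE AdventureWorks;  -- placeholder user database; not master, msdb, or tempdb
REVOKE CONNECT FROM GUEST;
```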
• Bulk import operations (bcp, BULK INSERT, and INSERT…SELECT), when the bulk import into the table is minimally logged.
• SELECT INTO operations.
• Partial updates to large value data types, using the .WRITE clause in the UPDATE statement when inserting or appending new data. Note that minimal logging is not used when existing values are updated.
• WRITETEXT and UPDATETEXT statements when inserting or appending new data into the text, ntext, and image data type columns. Note that minimal logging is not used when existing values are updated.
• CREATE INDEX operations (including indexed views).
• ALTER INDEX REBUILD or DBCC DBREINDEX operations.
• DROP INDEX new heap rebuild (if applicable).
IsAlive: Verifies that SQL Server accepts connections by executing sp_server_diagnostics. This health detection logic
determines if a node is down and the passive node then takes over the production workload.
A clustered index determines the order in which the rows of the table will be stored on disk – and it actually stores row-level data in the leaf nodes of the index itself. A non-clustered index has no effect on the order in which the rows are stored.
Using a clustered index is an advantage when groups of data that can be clustered are frequently accessed by some queries. This speeds up retrieval because the data lives close to each other on disk. Also, if data is accessed in the same order as the clustered index, the retrieval will be much faster because the physical data stored on disk is sorted in the same order as the index.
A clustered index can be a disadvantage because any time a change is made to a value of an indexed column, the subsequent possibility of re-sorting rows to maintain order is a definite performance hit.
A table can have multiple non-clustered indexes. But, a table can have only one clustered index.
Non clustered indexes store both a value and a pointer to the actual row that holds that value. Clustered indexes don’t need
to store a pointer to the actual row because of the fact that the rows in the table are stored on disk in the same exact order as
the clustered index – and the clustered index actually stores the row-level data in its leaf nodes.
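A short sketch of the two index types (table and column names are hypothetical):

```sql
-- One clustered index per table: the table's rows are physically
-- ordered by EmployeeID
CREATE CLUSTERED INDEX CIX_Employee_ID
ON dbo.Employee (EmployeeID);

-- As many non-clustered indexes as needed: each stores the key value
-- plus a locator back to the row
CREATE NONCLUSTERED INDEX IX_Employee_LastName
ON dbo.Employee (LastName);
```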
db_datareader: The db_datareader role has the ability to run a SELECT statement against any table or view in the database.
db_datawriter: The db_datawriter role has the ability to modify via INSERT, UPDATE, or DELETE data in any table or view in the database.
db_denydatareader: The db_denydatareader role is the exact opposite of the db_datareader role: instead of granting SELECT permissions on any database object, the db_denydatareader denies SELECT permissions.
db_denydatawriter: The db_denydatawriter role serves to restrict permissions on a given database. With this role, the user is prevented from modifying the data in any table or view via an INSERT, UPDATE, or DELETE statement.
We can use the RESTORE VERIFYONLY command to make sure that the database backups are restorable.
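A minimal example (the backup path is hypothetical):

```sql
-- Checks that the backup media is readable and the backup set is
-- complete, without actually restoring the database
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\SalesDB.bak';
```

Note that VERIFYONLY does not validate page contents unless the backup was taken WITH CHECKSUM.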
A Super Key is a set of one or more keys that can be used to identify a record uniquely in a table. Example: Primary Key, Unique Key, and Alternate Key are subsets of Super Keys.
The deny roles are an extra safeguard that you can use to make sure that certain logins or groups will never have the type of access that is specifically denied under the role. With nesting of groups in Windows and multiple role assignments sometimes individuals inadvertently end up with excessive permissions. This is just another level that can be applied to lessen these accidental cracks in the permissions hierarchy.
The msdb database contains information about backup and restore history.
A Candidate Key is a set of one or more fields/columns that can identify a record uniquely in a table. There can be multiple
Candidate Keys in one table. Each Candidate Key can work as Primary Key.
A database user for which the corresponding SQL Server login is undefined or is incorrectly defined on a server instance cannot log in to the instance. Such a user is said to be an orphaned user of the database on that server instance.
Here are the backup and restore system tables and their purpose:
• backupfile – contains one row for each data file or log file backed up
• backupmediafamily – contains one row for each media family
• backupmediaset – contains one row for each backup media set
• backupset – contains one row for each backup set
• restorefile – contains one row for each restored file
• restorefilegroup – contains one row for each restored filegroup
• restorehistory – contains one row for each restore operation
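These tables live in msdb and can be queried directly. A sketch of a recent-backup report (the join uses documented columns; the ordering is illustrative):

```sql
-- Most recent backups, with the file each one was written to
SELECT bs.database_name,
       bs.type,                       -- D = full, I = differential, L = log
       bs.backup_start_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_start_date DESC;
```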
A validation test is a mechanism for verifying that all the components participating in the Windows cluster are healthy and that failover happens between the nodes.
An Alternate Key is a key that can work as a primary key. Basically, it is a Candidate Key that currently is not the Primary Key.
This lists the orphaned users:
EXEC sp_change_users_login 'Report'
If you already have a login id and password for this user, fix it by doing:
EXEC sp_change_users_login 'Auto_Fix', 'user'
If you want to create a new login id and password for this user, fix it by doing:
EXEC sp_change_users_login 'Auto_Fix', 'user', NULL, 'password'
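sp_change_users_login is deprecated in later versions; the documented replacement maps an orphaned user to a login with ALTER USER. A sketch (the user and login names are hypothetical):

```sql
-- Re-map the orphaned database user to an existing server login
ALTER USER [AppUser] WITH LOGIN = [AppLogin];
```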
A DCM page contains information about the extents that have changed after the Full backup. The differential backup process reads information about the changed extents, and those extents are added to the differential backup.
Primary Key is a set of one or more fields/columns of a table that uniquely identify a record in a database table. It cannot accept null or duplicate values. Only one Candidate Key can be the Primary Key.
To hide an instance of the SQL Server Database Engine, use SQL Server Configuration Manager: open the properties of the instance's network protocols and set the Hide Instance flag to Yes.
Backup with COPY ONLY option can be used in such a situation.
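A sketch (database and path names are hypothetical):

```sql
-- COPY_ONLY takes an out-of-band full backup without resetting the
-- differential base, so the existing backup chain is unaffected
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_copyonly.bak'
WITH COPY_ONLY;
```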
These reports are automatically stored for you in C:\Windows\Cluster\Reports as MHTML files.
Unique key is a set of one or more fields/columns of a table that uniquely identify a record in database table. It is like Primary
key but it can accept only one null value and it cannot have duplicate values.
We can verify the backup history of the database to confirm whether backups are happening or not. The backupset table in msdb holds this information.
Foreign Key is a field in a database table that is the Primary Key in another table. It can accept multiple null and duplicate values. Example: We can have a DeptID column in the Employee table which points to the DeptID column in a Department table, where it is the primary key.
Transparent Data Encryption (TDE) is a feature introduced in SQL Server 2008 and available in later versions for bulk encryption at the database file level (data file, log file and backup file) i.e. the entire database at rest. Once enabled for a database, this feature encrypts data into pages before it is written to the disk and decrypts when read from the disk. The best part of this feature is, as its name implies, it’s completely transparent to your application. This means literally no application code changes (only administrative change to enable it for a database) are required and hence no impact on the application code\functionalities when enabling TDE on a database being referenced by that application.
This column tells us about the backup type.
Backup type. Can be:
• D = Database
• I = Differential database
• L = Log
• F = File or filegroup
• G = Differential file
• P = Partial
• Q = Differential partial
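For example, the type column can be used to filter the backup history (an illustrative query):

```sql
-- All full database backups recorded in msdb
SELECT database_name, backup_start_date, backup_finish_date
FROM msdb.dbo.backupset
WHERE type = 'D'
ORDER BY backup_start_date DESC;
```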
Yes, there will definitely be downtime when SQL Server fails over from one node to another.
By default, a Primary Key column is created with a Clustered Index.
The user must be a member of any one of the below roles
Backup:
• sysadmin – fixed server role
• db_owner – fixed database role
• db_backupoperator – fixed database role
No, it’s not possible with the SQL Server Cluster feature.
By default, a Unique Key column is created with a Non-Clustered Index.
The Service Master Key is the root of the SQL Server encryption hierarchy. It is generated automatically the first time it is needed to encrypt another key. By default, the Service Master Key is encrypted using the Windows data protection API and using the local machine key. The Service Master Key can only be opened by the Windows service account under which it was created or by a principal with access to both the service account name and its password.
Yes – We can enable the trace flag 3226.
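Trace flag 3226 suppresses the per-backup success messages written to the SQL Server error log. It can be enabled at runtime as sketched below, or added as a -T3226 startup parameter to make it permanent:

```sql
-- Enable trace flag 3226 globally (-1) for the running instance
DBCC TRACEON (3226, -1);

-- Confirm the trace flag status
DBCC TRACESTATUS (3226);
```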
No, it’s not possible. SQL Server 2012 and all previous versions of SQL Server require databases be created on clustered
resources. Internal drives or drives which are not part of the cluster group cannot hold user databases.
A table can have one of the following index configurations:
Symmetric Key – In a symmetric cryptography system, the sender and the receiver of a message share a single, common key that is used to encrypt and decrypt the message. This is relatively easy to implement, and both the sender and the receiver can encrypt or decrypt messages.
Asymmetric Key – Asymmetric cryptography, also known as public-key cryptography, is a system in which the sender and the receiver of a message use a pair of cryptographic keys – a public key and a private key – to encrypt and decrypt the message. The sender encrypts the message with the receiver's public key, and only the receiver can decrypt it with his private key. This intricacy makes it a relatively complex and resource-intensive process.
An enhancement introduced in SQL Server 2012 SP1 Cumulative Update 2 enables backup and restore to and from the Windows Azure Blob storage service from SQL Server using T-SQL.
With the introduction of SQL Server 2012 Microsoft officially supports local disk TempDB in SQL Server cluster configurations.
CREATE INDEX name_index
ON Employee (Employee_Name, Department)
BACKUP SERVICE MASTER KEY TO FILE = 'path_to_file' ENCRYPTION BY PASSWORD = 'password'
The following are limitations specific to this release:
• The maximum backup size supported is 1 TB.
• In this implementation, you can issue backup or restore statements by using T-SQL or SMO. Backing up to or restoring from the Windows Azure Blob storage service by using the SQL Server Management Studio Backup or Restore wizard is not currently enabled.
No it is not possible.
An index is structured by the SQL Server Index Manager as a balanced tree (or B-tree). A B-tree is similar to an upside-down tree, with the root of the tree at the top, the leaf levels at the bottom, and intermediate levels in between.
Each object in the tree structure is a group of sorted index keys called an index page.
All search requests begin at the root of a B-tree and then move through the tree to the appropriate leaf level.
SQL Server 2005 contains configuration tools such as a system stored procedure called sp_configure or SQL Server Surface Area Configuration tool (for services and features) in order to enable/disable optional features as needed. Those features are usually installed as disabled by default. Here is the list of the features that can be enabled using the tool:
Some typical examples include the following:
• You request a file backup of specific files, but one of the files is not online. The operation fails. To back up the online files, you can omit the offline file from the file list and repeat the operation.
• You request a partial backup, but a read/write filegroup is offline. Because all read/write filegroups are required for a partial backup, the operation fails.
• You request a full database backup, but one filegroup of the database is offline. Because all filegroups are implicitly included in a full database backup, this operation fails. To back up this database, you can use a file backup and specify only the filegroups that are online.
SMB stands for Server Message Block. Starting with SQL Server 2012, an SMB file server can be used as a storage option for system databases (master, model, msdb, and tempdb) and for Database Engine user databases.
SQL Server is divided into two main engines: the Relational Engine and the Storage Engine. The Relational Engine contains below components:
Operations that cannot run during a database backup or transaction log backup include the following:
• File-management operations such as the ALTER DATABASE statement with either the ADD FILE or REMOVE FILE options.
• Shrink database or shrink file operations. This includes auto-shrink operations.
• If you try to create or delete a database file while a backup operation is in progress, the create or delete operation fails.
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS')
SQL Server supports three types of checksums: a checksum on pages, a checksum in log blocks, and a backup checksum. When generating a backup checksum, BACKUP verifies that the data read from the database is consistent with any checksum or torn-page indication that is present in the database.
The BACKUP statement optionally computes a backup checksum on the backup stream; if page-checksum or torn-page
information is present on a given page, when backing up the page, BACKUP also verifies the checksum and torn-page status and
the page ID, of the page. When creating a backup checksum, a backup operation does not add any checksums to pages. Pages
are backed up as they exist in the database, and the pages are unmodified by backup.
AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level
alternative to database mirroring. Introduced in SQL Server 2012, AlwaysOn Availability Groups maximizes the availability of a
set of user databases for an enterprise. An availability group supports a failover environment for a discrete set of user databases, known as availability databases, that fail over together. An availability group supports a set of read-write primary databases and
one to four sets of corresponding secondary databases. Optionally, secondary databases can be made available for read-only
access and/or some backup operations.
No, we cannot insert data if the Clustered Index is disabled, because Clustered Indexes are in fact the original tables which are physically ordered according to one or more keys (columns).
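This can be demonstrated by disabling a clustered index; the table and index names below are hypothetical:

```sql
-- Disabling the clustered index makes the whole table inaccessible
ALTER INDEX PK_Employee ON dbo.Employee DISABLE;

-- Any INSERT or SELECT against dbo.Employee now fails until the
-- index is rebuilt, which re-enables it
ALTER INDEX PK_Employee ON dbo.Employee REBUILD;
```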
SQL Server Setup generates a configuration file named ConfigurationFile.ini, based upon the system default and run-time
inputs. The ConfigurationFile.ini file is a text file which contains the set of parameters in name/value pairs along with
descriptive comments. Many of the parameter names correspond to the screens and options which you see while installing
SQL Server through the wizard. We can then use the configuration file to install SQL Server with the same configuration
instead of going through each of the installation screens.
Backup is an important component of a sound disaster recovery strategy. Here are some best practices you can follow to ensure you have a good backup in place:
• Make sure you are not storing your backups in the same physical location as the database files. When your physical drive goes bad, you should be able to use the other drive or remote location that stored the backups in order to perform a restore. Keep in mind that you could create several logical volumes or partitions from a same physical disk drive. Carefully study the disk partition and logical column layouts before choosing a storage location for the backups.
• Make sure you have a proper backup schedule established according to the needs of the application and business requirements. As the backups get older, the risk of data loss is higher unless you have a way to regenerate all the data up to the point of failure.
• Make sure to actually restore the backups on a test server and verify that you can restore with all the options and conditions you need to use during a planned or un-planned downtime.
• Use the verification options provided by the backup utilities [BACKUP TSQL command, SQL Server Maintenance Plans, your backup software or solution, etc].
• Use advanced features like BACKUP CHECKSUM to detect problems with the backup media itself.
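A sketch of a checksum-protected backup and its verification (names and paths are hypothetical):

```sql
-- CHECKSUM validates existing page checksums while reading and adds
-- a checksum over the whole backup stream
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB.bak'
WITH CHECKSUM;

-- Verify the backup, re-checking the checksums on the media
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\SalesDB.bak'
WITH CHECKSUM;
```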
A container for a set of databases, availability databases, that fails over together.
Clustered Index: Clustered index defines the way in which data is ordered physically on the disk. And there can only be one
way in which you can order the data physically. Hence there can only be one clustered index per table.
We can find the configuration file in the C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log folder. There will be a subfolder based on a timestamp of when the SQL Server 2012 installation was done.
Yes we can save multiple copies of database backup in a single file.
A database that belongs to an availability group. For each availability database, the availability group maintains a single read-write copy (the primary database) and one to four read-only copies (secondary databases).
We can have one clustered index and up to 999 non-clustered indexes on a table.
Based on the components selected during the installation, we will find a respective service for each component in the Windows Services, e.g. SQL Server, SQL Server Agent, SQL Server Analysis Services, SQL Server Integration Services, etc. There will be a user account for each service under which that service runs. That user is called the Service Account of that service.
Mainly we categorize the Service account as below:
Local User Account: This user account is created in the server where SQL Server is installed; this account does not have access to network resources.
Local Service Account: This is a built-in Windows account that is available for configuring services in Windows. This account has the same permissions as accounts in the Users group, thus it has limited access to the resources in the server.
Local System Account: This is a built-in Windows account that is available for configuring services in Windows. This is a highly privileged account that has access to all resources in the server with administrator rights.
Network Service Account: This is a built-in Windows account that is available for configuring services in Windows. It has permissions to access resources in the network under the computer account.
Domain Account: This account is a part of our domain and has access to the network resources for which it is intended to have permission. It is always advised to run SQL Server and its related services under a domain account with the minimum privileges needed to run SQL Server and its related services.
There are many tools available in the market for SQL Server backups, such as:
• SQL Litespeed (Dell)
• SQL Backup Pro (Redgate)
• SQL Safe Backup (Idera)
SQL Server Enterprise Edition
Maximum of 16 columns can be a part of an index.
Three copies are allowed in a Mirror backup apart from the original copy.
A heap is a table without a clustered index. Data is stored in the heap without specifying an order. Usually data is initially stored in the order in which the rows are inserted into the table, but the Database Engine can move data around in the heap to store the rows efficiently, so the data order cannot be predicted.
There could be multiple reasons like:
• Permissions issues if the backups are configured to be taken on a share location
• The backup file is locked by tape backup software, due to which the backup process is not able to overwrite the backup file.
• A Full backup was not taken before initiating a Diff. or Transaction log backup
• Not enough space available on the target location
A total of 5: 1 primary and up to 4 secondaries.
No, it is not required. It’s not mandatory to grant Administrator permissions to the service account.
Recovery Time Objective (RTO) is the amount of time which data or hardware is desired to be restored after a data corruption
or hardware failure.
Up to 10 availability groups is the recommendation, but it’s not enforced
This directory contains the common files used by all instances on a single computer e.g. SSMS, sqlcmd, bcp, DTExec etc. These
are installed in the folder
Recovery Point Objective (RPO) describes a point in time that data can be restored from. For instance, if there is data corruption, data loss, or unavailability, at what point in time can a valid copy of the data be restored from? RPO is usually measured in minutes, hours, days, weeks, etc.
Up to 100 is the recommendation, but it’s not enforced
Data is stored in the leaf-level pages of an index. The percentage of space to be filled with data in a leaf-level page is decided by the fill factor. The remaining space is left for future growth of data in the page. Fill factor is a number from 1 to 100; its default value is 0, which is the same as 100. So a fill factor of 70 means 70% of the space is filled with data and the remaining 30% is vacant for future use. The higher the fill factor, the more data is stored in the page. The fill factor setting is applied when we create/rebuild an index.
An instance of the Database Engine is a copy of the sqlservr.exe executable that runs as an operating system service. Each
instance manages its own system databases and one or more user databases. An instance is a complete copy of an SQL Server
installation.
TDE provides the ability to encrypt an entire database and to have the encryption be completely transparent to the
applications that access the database. TDE encrypts the data stored in both the database’s data file (.mdf) and log file (.ldf)
using either Advanced Encryption Standard (AES) or Triple DES (3DES) encryption. In addition, any backups of the database are
encrypted. This protects the data while it’s at rest as well as provides protection against losing sensitive information if the
backup media were lost or stolen.
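The usual sequence to enable TDE, sketched with hypothetical object names and a placeholder password:

```sql
USE master;
GO
-- Database master key in master, protected by a password
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
GO
-- Certificate that will protect the database encryption key
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE SalesDB;
GO
-- Database encryption key, protected by the certificate
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
-- Turn on TDE for the database
ALTER DATABASE SalesDB SET ENCRYPTION ON;
```

Back up the certificate and its private key immediately; without them the encrypted database and its backups cannot be restored elsewhere.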
The availability mode is a property of each availability replica. The availability mode determines whether the primary replica
waits to commit transactions on a database until a given secondary replica has written the transaction log records to disk
(hardened the log).
Below are the criteria you need to consider while choosing fill factor.
There are two types of Instances.
• Default instance
• Named instance
Each computer can run a maximum of 50 instances of the Database Engine. One instance can be the default instance.
The default instance has no name. If a connection request specifies only the name of the computer, the connection is made to the default instance.
A named instance is one where you specify an instance name when installing the instance. A connection request must specify both the computer name and instance name in order to connect to the instance.
TDE requires SQL Server 2012 Enterprise edition. It’s not available in SQL Server 2012 Standard or Business Intelligence editions.
TDE is also available in SQL Server 2008 and SQL Server 2008 R2 Datacenter and Enterprise editions.
No, we don’t need SQL Server Cluster instances to configure AlwaysOn.
Server level: A generic fill factor setting is applied at the server level for all table/index. We can set a server level default fill factor by using sp_configure with a parameter as below script.
EXEC sys.sp_configure 'fill factor', 90
GO
RECONFIGURE WITH OVERRIDE
GO
At Index/Table level: While creating/rebuilding index we can set a specific fill factor. We can use below script to set fill factor while rebuilding index.
ALTER INDEX Index_name ON [SchemaName].[TableName]
REBUILD WITH (FILLFACTOR = 80);
GO
Yes, we can install multiple instances on the same disk drive because each installation creates its own folder with the below format.
MSSQL11.INSTANCENAME
Yes, some performance overhead is involved in using TDE. The encryption and decryption processes require additional CPU cycles. The overhead for using TDE ranges from about 3 percent to 30 percent, depending on the type of workload.
SQL Server instances with low I/O and low CPU usage will have the least performance impact. Servers with high CPU usage
will have the most performance impact.
This option specifies index padding. When turned ON, the percentage specified by FILLFACTOR is applied to the intermediate-level and root-level pages of an index.
PAD_INDEX = { ON | OFF } When ON, the percentage of free space specified by FILLFACTOR is applied to the intermediate level
pages of the index. When OFF or a FILLFACTOR value is not specified, the intermediate level pages are filled to near capacity
leaving enough space for at least one row of the maximum size the index can have, considering the set of keys on the
intermediate pages. The default is OFF.
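A sketch combining FILLFACTOR and PAD_INDEX (index and table names are hypothetical):

```sql
-- 80% fill on leaf pages; PAD_INDEX = ON applies the same 80% to the
-- intermediate-level pages as well
CREATE NONCLUSTERED INDEX IX_Employee_LastName
ON dbo.Employee (LastName)
WITH (FILLFACTOR = 80, PAD_INDEX = ON);
```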
Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case-sensitivity, accent marks, kana character types and character width.
Default collation: SQL_Latin1_General_CP1_CI_AS
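The effective collations can be inspected at the server and database level, for example:

```sql
-- Server-level default collation
SELECT SERVERPROPERTY('Collation') AS ServerCollation;

-- Collation of each database
SELECT name, collation_name FROM sys.databases;
```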
A master key is a symmetric key that is used to create certificates and asymmetric keys.
AlwaysOn supports the below availability modes.
Indexes can be defined on views. Indexed views are a method of storing the result set of the view in the database, thereby
reducing the overhead of dynamically building the result set. An indexed view also automatically reflects modifications made to
the data in the base tables after the index is created.
• Performs real-time I/O encryption and decryption of the data and log files
• Encrypts the Entire Database in rest
• No architectural changes needed
• No application code changes are required and the user experience is the same
• Easy to implement
• DBAs can still see the data
Asynchronous-commit mode
An availability replica that uses this availability mode is known as an asynchronous-commit replica. Under asynchronous-commit mode, the primary replica commits transactions without waiting for acknowledgement that an asynchronous-commit secondary replica has hardened the log. Asynchronous-commit mode minimizes transaction latency on the secondary databases but allows them to lag behind the primary databases, making some data loss possible.
Synchronous-commit mode.
An availability replica that uses this availability mode is known as a synchronous-commit replica. Under synchronous-commit
mode, before committing transactions, a synchronous-commit primary replica waits for a synchronous-commit secondary
replica to acknowledge that it has finished hardening the log. Synchronous-commit mode ensures that once a given secondary
database is synchronized with the primary database, committed transactions are fully protected. This protection comes at the
cost of increased transaction latency.
Index fragmentation is an “expected” and “unavoidable” characteristic of any OLTP environment. Fragmentation is defined as
any condition which causes more than the optimal amount of disk I/O to be performed in accessing a table, or causes the disk
I/Os that are performed to take longer than they optimally would.
Service Pack is abbreviated as SP, a service pack is a collection of updates and fixes, called patches, for an operating system or a software program. Many of these patches are often released before the larger service pack, but the service pack allows for an easy, single installation.
Patch – Publicly released update to fix a known bug/issue
Hotfix – update to fix a very specific issue, not always publicly released
• Not granular – Cannot just encrypt specific tables/columns
• Not good for high CPU bottleneck servers
• Not protected through communication/networks
The availability replica that makes the primary databases available for read-write connections from clients and, also, sends
transaction log records for each primary database to every secondary replica.
SQL Server 2012 Service Pack 2 (SP2)
Version: 11.0.5058.0
Release Date: 6/10/2014
MAXTRANSFERSIZE: specifies the largest unit of transfer in bytes to be used between SQL Server and the backup media. The possible values are multiples of 64 KB ranging up to 4194304 bytes (4 MB). The default is 1 MB.
An availability replica that maintains a secondary copy of each availability database, and serves as a potential failover target for the availability group. Optionally, a secondary replica can support read-only access to secondary databases and/or creating backups on secondary databases.
Various Types of Index Fragmentation
Steps to install Service pack in Production environments:
1. First of all raise a change order and get the necessary approvals for the downtime window. Normally it takes around 45-60 minutes to install Service pack if there are no issues.
2. Once the downtime window is started, take a full backup of the user databases and system databases including the Resource database.
3. List down all the Startup parameters, Memory Usage, CPU Usage etc and save it in a separate file.
4. Install the service pack on SQL Servers.
5. Verify all the SQL Services are up and running as expected.
6. Validate the application functionality.
Note: There is a different approach to install Service pack on SQL Server cluster instances. That will be covered in SQL Server
cluster.
BUFFERCOUNT specifies the total number of I/O buffers to be used for the backup operation. The total space that will be
used by the buffers is determined by: buffercount * maxtransfersize.
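A sketch of a backup tuned with both options (values and paths are illustrative):

```sql
-- 4 MB transfer size with 16 buffers: up to 16 * 4 MB = 64 MB of
-- buffer memory used by this backup operation
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB.bak'
WITH MAXTRANSFERSIZE = 4194304, BUFFERCOUNT = 16;
```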
A server name to which clients can connect in order to access a database in a primary or secondary replica of an AlwaysOn
availability group. Availability group listeners direct incoming connections to the primary replica or to a read-only secondary
replica.
When pages are less than fully used, the part of each page that is unused constitutes a form of fragmentation, since the table’s
or index’s rows are no longer packed together as tightly as they could be. This is known as Internal Fragmentation.
No, it’s not mandatory to restart Windows server after installing SQL Server service pack but it is always a good practice to do
so.
A continuous sequence of log backups is called a log chain. A log chain starts with a full backup of the database. Usually, a new log chain is only started when the database is backed up for the first time or after the recovery model is switched from simple recovery to full or bulk-logged recovery.
Unless you choose to overwrite existing backup sets when creating a full database backup, the existing log chain remains intact.
With the log chain intact, you can restore your database from any full database backup in the media set, followed by all
subsequent log backups up through your recovery point. The recovery point could be the end of the last log backup or a specific
recovery point in any of the log backups.
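A sketch of an intact log chain and a restore from it (names and paths are hypothetical):

```sql
-- Start of the log chain
BACKUP DATABASE SalesDB TO DISK = N'D:\Backups\SalesDB_full.bak';
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log1.trn';
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log2.trn';

-- Restore the full backup, then every subsequent log backup in sequence
RESTORE DATABASE SalesDB FROM DISK = N'D:\Backups\SalesDB_full.bak' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_log1.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_log2.trn' WITH RECOVERY;
```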
The AlwaysOn Availability Groups active secondary capabilities include support for read-only access to one or more secondary
replicas (readable secondary replicas). A readable secondary replica allows read-only access to all its secondary databases.
However, readable secondary databases are not set to read-only. They are dynamic. A given secondary database changes as
changes on the corresponding primary database are applied to the secondary database.
It usually occurs due to -
SQL Server 2008 introduced a concept that’s called “Slipstream Installation”. This is a way to deploy a SQL Server instance with
all the needed Service pack as part of the installation. Everything will be installed in one go, hence there is no need to deploy
any other service packs on the installation.
No, Full or Differential backups do not clear the Transaction log.
Directing read-only connections to readable secondary replicas provides the following benefits:
In addition, readable secondaries provide robust support for read-only operations, as follows:
SQL Server uses 8 KB pages to store data on disk. When a clustered index is created on a table, SQL Server creates a b-tree data
structure for the index and links all data pages at the leaf level of the tree in a logical order by using a doubly linked list. Logical fragmentation occurs when the pages in this doubly linked list are not contiguous, i.e., the logical ordering of the pages,
which is based on the key value, does not match their physical ordering inside the data file.
The procedure of installing a SQL Server instance from the command line using a ConfigurationFile.ini file in quiet mode is known as
silent installation.
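As a hedged sketch of what such a command line typically looks like (the configuration file path is a placeholder; the flags are standard SQL Server setup parameters, with /Q requesting quiet mode):

```shell
:: Quiet (silent) installation driven by a configuration file.
:: Run from the SQL Server installation media; the path is hypothetical.
setup.exe /ConfigurationFile="C:\Temp\ConfigurationFile.ini" /Q /IAcceptSQLServerLicenseTerms
```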
Yes, it is possible when you have not taken a full backup of the database for months and the changes in the database grow
larger than the size of the full backup.
We can have up to two synchronous replicas, but we are not required to use any. We could run all secondaries in asynchronous
mode if desired.
This could happen due to -
Yes, it is mandatory to take a full backup of the database after switching the recovery model in order to initiate the log
chain. Otherwise, differential and transaction log backups will fail.
Yes. An active secondary can be used to offload read-only queries from the primary to a secondary instance in the availability
group.
Extent fragmentation occurs when the extents of a table or index are not contiguous with the database leaving extents from
one or more indexes intermingled in the file.
We have the below options to deal with an overgrowing transaction log file:
• Backing up the log.
• Freeing disk space so that the log can automatically grow.
• Adding a log file on a separate disk drive.
• Increasing the size of the log file.
• Killing a long-running transaction.
Yes. An active secondary can be used for some types of backups (copy-only full backups and regular transaction log backups).
This can occur due to:
By default, the SQL Server default instance listens on TCP port 1433.
The database recovery model deals with the retention of transaction log entries. The recovery model determines whether transaction log backups need to be taken on a regular basis to keep the transaction log small, or whether the log is truncated automatically.
• Simple – Committed transactions are removed from the log when the checkpoint process occurs.
• Bulk Logged – Committed transactions are only removed when the transaction log backup process occurs.
• Full – Committed transactions are only removed when the transaction log backup process occurs.
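A quick way to inspect and switch the recovery model (the database name below is a placeholder):

```sql
-- Check the current recovery model of a database.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'SalesDb';

-- Switch it; the options are FULL, SIMPLE, and BULK_LOGGED.
ALTER DATABASE SalesDb SET RECOVERY FULL;
```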
We can use the sys.dm_db_index_physical_stats DMF.
Below are the methods we can use to get the port information:
Method 1: SQL Server Configuration Manager
Method 2: Windows Event Viewer
Method 3: SQL Server Error Logs
Method 4: sys.dm_exec_connections DMV
Method 5: Reading the registry using xp_instance_regread
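Method 4 in practice might look like the following sketch; note that local_tcp_port is NULL for connections that do not use TCP (e.g. shared memory), so run it over a TCP connection:

```sql
-- TCP port used by the current connection.
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```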
SQL Server 2012 SP1 CU2 enables SQL Server backup and restore directly to the Windows Azure Blob service. Backup to the cloud
offers benefits such as availability, limitless geo-replicated off-site storage, and ease of migrating data to and from the
cloud. In this release, you can issue BACKUP or RESTORE statements by using Transact-SQL or SMO. Backing up to or restoring from
the Windows Azure Blob storage service by using the SQL Server Management Studio Backup or Restore wizard is not available in this
release.
Yes, we can take transaction log backups on the secondary replicas without the COPY_ONLY option.
The sys.dm_db_index_physical_stats DMF has three modes:
1. DETAILED – reads all data and index pages. Be careful with this option, since it causes the entire index to be read into memory and may result in IO/memory issues.
2. SAMPLED – reads 1% of the pages if the index has more than 10,000 pages.
3. LIMITED – only reads the parent level of the b-tree (same as DBCC SHOWCONTIG WITH FAST). The LIMITED option does not report page density, since it does not read the leaf-level pages.
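A typical fragmentation check using the cheapest (LIMITED) mode, as a sketch; it scans every index in the current database:

```sql
-- Fragmentation per index in the current database, LIMITED mode.
SELECT OBJECT_NAME(ips.object_id)       AS table_name,
       i.name                           AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
```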
FILESTREAM was introduced in SQL Server 2008 for the storage and management of unstructured data. The FILESTREAM feature
allows storing BLOB data (example: word documents, image files, music and videos etc) in the NT file system and ensures
transactional consistency between the unstructured data stored in the NT file system and the structured data stored in the
table.
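A minimal sketch of a table with a FILESTREAM column (table and column names are hypothetical; this assumes FILESTREAM is enabled at the instance level and the database has a FILESTREAM filegroup):

```sql
-- FILESTREAM columns must be VARBINARY(MAX) and the table needs a
-- ROWGUIDCOL column with a UNIQUE constraint.
CREATE TABLE dbo.Documents
(
    DocId      UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    DocName    NVARCHAR(260)    NOT NULL,
    DocContent VARBINARY(MAX)   FILESTREAM NULL
);
```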
A SQL Server credential is an object that is used to store authentication information required to connect to a resource outside
of SQL Server. Here, SQL Server backup and restore processes use credential to authenticate to the Windows Azure Blob
storage service. The Credential stores the name of the storage account and the storage account access key values. Once the
credential is created, it must be specified in the WITH CREDENTIAL option when issuing the BACKUP/RESTORE statements.
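Putting the two answers together, a hedged sketch of backing up to the Windows Azure Blob service (the storage account name, key, container, and database name are all placeholders):

```sql
-- Credential holding the storage account name and access key.
CREATE CREDENTIAL AzureBackupCred
WITH IDENTITY = 'mystorageaccount',              -- storage account name
     SECRET   = '<storage-account-access-key>';  -- storage account key

-- Back up directly to a blob, authenticating with the credential.
BACKUP DATABASE SalesDb
TO URL = N'https://mystorageaccount.blob.core.windows.net/backups/SalesDb.bak'
WITH CREDENTIAL = 'AzureBackupCred';
```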
Within the context of a session between the primary replica and a secondary replica, the primary and secondary roles are
potentially interchangeable in a process known as failover. During a failover the target secondary replica transitions to the
primary role, becoming the new primary replica. The new primary replica brings its databases online as the primary databases,
and client applications can connect to them. When the former primary replica is available, it transitions to the secondary role,
becoming a secondary replica. The former primary databases become secondary databases and data synchronization resumes.
SQL Server error logs are stored in the below location.
Instance Root Directory\MSSQL\Log
• Flexible, reliable, and limitless off-site storage: Storing your backups on Windows Azure Blob service can be a convenient, flexible, and easy to access off-site option.
• No overhead of hardware management
• Cost Benefits: Pay only for the service that is used. Can be cost-effective as an off-site and backup archive option.
Three forms of failover exist: automatic, manual, and forced (with possible data loss). The form or forms of failover supported
by a given secondary replica depend on its availability mode.
By default, there are seven SQL Server error logs; Errorlog and Errorlog.1 through Errorlog.6. The name of the current, most
recent log is Errorlog with no extension. The log is re-created every time that you restart SQL Server. When the Errorlog file is
re-created, the previous log is renamed to Errorlog.1, and the next previous log (Errorlog.1) is renamed to Errorlog.2, and so on.
Errorlog.6 is deleted.
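The error logs can also be read from T-SQL; a sketch using the undocumented but widely used xp_readerrorlog procedure:

```sql
-- Read an error log. First argument: log number (0 = current, 1 = Errorlog.1, ...).
-- Second argument: log type (1 = SQL Server error log, 2 = SQL Server Agent log).
EXEC sys.xp_readerrorlog 0, 1;
```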
This job is failing because we did not take a full database backup of the newly created database. We need a full backup
of a database to initiate the log chain.
From SQL Server 2008 onwards we can use the REBUILD command to remove fragmentation from a heap:
ALTER TABLE … REBUILD (SQL 2008+).
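As a one-line sketch (the table name is hypothetical):

```sql
-- Rebuild a heap to remove fragmentation (SQL Server 2008+).
ALTER TABLE dbo.OrdersHeap REBUILD;
```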
Yes, it is possible to change the number of error logs retained. We can follow the below steps to change the error log file retention:
1. Open SQL Server Management Studio and then connect to the SQL Server instance.
2. In Object Explorer, expand the Management node, right-click SQL Server Logs, and click Configure.
3. In the Configure SQL Server Error Logs window, enter a value between 6 and 99 for the number of error logs and click OK to save the changes.
This is a very open-ended question.
As per my understanding, organizations normally follow the below standards:
Prod/QA Environment:
For Small size databases (<200 GB)
• Full Backup – Daily
• Differential Backups – NA
• Transaction Log backups – Every 10-15 minutes depending upon the RPO
For Large size databases (>=200 GB)
• Full Backup – Weekly
• Differential Backups – Daily
• Transaction Log backups – Every 10-15 minutes depending upon the RPO
Again, it all depends upon the criticality of the database; e.g. for data warehousing databases it may be possible that there is no
requirement for transaction log backups.
A manual failover occurs after a database administrator issues a failover command and causes a synchronized secondary replica
to transition to the primary role (with guaranteed data protection) and the primary replica to transition to the secondary role.
A manual failover requires that both the primary replica and the target secondary replica are running under synchronous-commit mode, and the secondary replica must already be synchronized.
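A manual failover is issued on the target secondary replica; as a sketch (the availability group name is hypothetical):

```sql
-- Planned manual failover with guaranteed data protection;
-- run on the synchronized target secondary replica.
ALTER AVAILABILITY GROUP [MyAG] FAILOVER;
```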
When the query optimizer generates a query plan, it analyzes what are the best indexes for a particular filter condition. If the
best indexes do not exist, the query optimizer generates a suboptimal query plan, but still stores information about these
indexes. The missing indexes feature enables you to access information about these indexes so you can decide whether they
should be implemented.
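The missing-index information is exposed through a set of DMVs; a sketch that ranks candidates by a rough impact estimate:

```sql
-- Candidate missing indexes, weighted by seeks and estimated impact.
SELECT TOP (10)
       mid.statement          AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details     AS mid
JOIN sys.dm_db_missing_index_groups      AS mig
  ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
  ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```

These are only optimizer suggestions; each candidate should be evaluated against the existing indexes before being created.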
Authentication is the process of verifying who you are. Logging on to a PC with a username and password is authentication.
Authorization is the process of verifying that you have access to something. Authorization is gaining access to a resource (e.g.
directory on a hard disk) because the permissions configured on it allow you to access it.
Again, it all varies with the organization's policies. As a general practice it is always better to keep database backups on a shared
location (in the data center) for at least 2-3 days. Daily backups should be written to tape and retained for a
month or so. As a preventive measure there should also be monthly or bimonthly backups with a retention policy of at least
one year.
An automatic failover occurs in response to a failure that causes a synchronized secondary replica to transition to the primary
role (with guaranteed data protection). When the former primary replica becomes available, it transitions to the secondary role.
Automatic failover requires that both the primary replica and the target secondary replica are running under synchronous-commit mode with the failover mode set to “Automatic”. In addition, the secondary replica must already be synchronized, have
WSFC quorum, and meet the conditions specified by the flexible failover policy of the availability group.
There are two type of authentication available in SQL Server.
Windows Authentication — TRUSTED connection
Windows Logins
Windows Groups
MIXED authentication — NON Trusted connection
Windows Logins
Windows Groups
SQL Server logins
Database Maintenance Plans allow us to automate many database administration tasks in Microsoft SQL Server. Maintenance
plans create a workflow of the tasks required to make sure that our database is optimized, regularly backed up, and free of
inconsistencies.
SQL Server Failover Cluster Instances (FCIs) do not support automatic failover by availability groups, so any availability replica
that is hosted by an FCI can only be configured for manual failover.
sys.dm_db_index_usage_stats – Returns counts of different types of index operations and the time each type of operation was
last performed.
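A sketch of how this DMV is commonly used, comparing read activity (seeks, scans, lookups) against write cost (updates) per index:

```sql
-- Read vs. write activity per index in the current database.
SELECT OBJECT_NAME(ius.object_id) AS object_name,
       i.name                     AS index_name,
       ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
  ON i.object_id = ius.object_id AND i.index_id = ius.index_id
WHERE ius.database_id = DB_ID();
```

The counters are reset when the instance restarts, so interpret them relative to the last restart time.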
Windows authentication mode requires users to provide a valid Windows username and password to access the database server. In enterprise environments, these credentials are normally Active Directory domain credentials.
Mixed authentication mode allows the use of Windows credentials but supplements them with local SQL Server user accounts
that the administrator may create and maintain within SQL Server.
SQL Server 2012 maintenance plans provide the following features:
• Workflow creation using a variety of typical maintenance tasks. You can also create your own custom Transact-SQL scripts.
• Conceptual hierarchies. Each plan lets you create or edit task workflows. Tasks in each plan can be grouped into subplans, which can be scheduled to run at different times.
• Support for multiserver plans that can be used in master server/target server environments.
• Support for logging plan history to remote servers.
• Support for Windows Authentication and SQL Server Authentication. When possible, use Windows Authentication.
The only remaining form of failover is forced manual failover (with possible data loss), typically called forced failover. Forced failover is
considered a form of manual failover because it can only be initiated manually. Forced failover is a disaster recovery option, and it
is the only form of failover that is possible when the target secondary replica is not synchronized with the primary replica.
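As a sketch (the availability group name is hypothetical):

```sql
-- Disaster-recovery only: run on the target secondary replica.
-- Data loss is possible because the replica may not be synchronized.
ALTER AVAILABILITY GROUP [MyAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;
```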
When we create or rebuild an index, by setting the SORT_IN_TEMPDB option to ON we can direct the SQL Server Database
Engine to use tempdb to store the intermediate sort results that are used to build the index. Although this option increases the
amount of temporary disk space that is used to create an index, the option could reduce the time that is required to create or
rebuild an index when tempdb is on a set of disks different from that of the user database.
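As a sketch (index and table names are hypothetical):

```sql
-- Rebuild an index, placing intermediate sort results in tempdb.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (SORT_IN_TEMPDB = ON);
```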
Windows authentication is definitely more secure as it’s controlled and authenticated by Active Directory policies.
We have below options in the Maintenance Wizard.
1. Check Database Integrity – checks the logical and physical integrity of all objects in the database
2. Shrink Database – shrinks the data and log files in the specified database on a file-by-file basis, physically making the files smaller by releasing unused space.
3. Reorganize Index – uses minimal system resources and defragments indexes of tables and views by reordering the pages to match the logical order
4. Rebuild Index – drops and recreates the index reclaiming space and reorders the index rows in contiguous pages
5. Update Statistics – updating statistics ensures queries run with up-to-date statistics, but it also requires queries to be recompiled, which can result in a performance trade-off
6. Clean Up History – deletes entries in the history tables
7. Execute SQL Server Agent Job – runs a SQL Server Agent job (the Agent is a Windows service that runs scheduled jobs)
8. Backup Database (Full) – Backs up the entire database, but as databases grow, the backup takes longer and requires more storage. The best solution is to use in conjunction with differential backups.
9. Backup Database (Differential) – used to back up only the data that has changed since the last full backup
10. Backup Database (Transaction Log) – used to back up individual transaction logs, which can be individually restored since the last full backup
11. Maintenance clean up task – performs housekeeping functions
Database administrators use the AlwaysOn Dashboard to obtain an at-a-glance view of the health of an AlwaysOn availability group and its availability replicas and databases in SQL Server 2012. Some of the typical uses for the AlwaysOn Dashboard are:
Principals are entities that can request SQL Server resources. A Windows Login is an example of an indivisible principal, and a Windows Group is an example of a principal that is a collection. Every principal has a security identifier (SID). e.g.
Windows-level principals
• Windows Domain Login
• Windows Local Login
SQL Server-level principals
• SQL Server Login
• Server Role
Database-level principals
• Database User
• Database Role
• Application Role
msdb database contains information about all the Database Maintenance plans.
Availability Group Wizard is a GUI using SQL Server Management Studio to create and configure an AlwaysOn availability
group in SQL Server 2012.
Data compression can be configured for the following database objects:
Securables are the resources to which the SQL Server Database Engine authorization system regulates access. For example, a
table is a securable. Some securables can be contained within others, creating nested hierarchies called “scopes” that can
themselves be secured. The securable scopes are server, database, and schema.
• sysdbmaintplan_databases: Contains one row for each database that has an associated upgraded database maintenance plan.
• sysdbmaintplan_history: Contains one row for each upgraded database maintenance plan action performed.
• sysdbmaintplan_jobs: Contains one row for each upgraded database maintenance plan job.
• Sysdbmaintplans: Contains one row for each upgraded database maintenance plan
Securable scope: Server – The server securable scope contains the following securables:
• Endpoint
• Login
• Server role
• Database
Securable scope: Database – The database securable scope contains the following securables:
• User
• Database role
• Application role
• Assembly
• Message type
• Route
• Service
• Remote Service Binding
• Full text catalog
• Certificate
• Asymmetric key
• Symmetric key
• Contract
• Schema
Securable scope: Schema – The schema securable scope contains the following securables:
• Type
• XML schema collection
• Object – The object class has the following members:
o Aggregate
o Function
o Procedure
o Queue
o Synonym
o Table
o View
Yes, we can change a Database Maintenance Plan at any time.
Data compression is implemented at two levels: ROW and PAGE.
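As a sketch of enabling each level (the table and index names are hypothetical):

```sql
-- PAGE compression on a table; PAGE includes ROW compression plus
-- prefix and dictionary compression.
ALTER TABLE dbo.Sales REBUILD WITH (DATA_COMPRESSION = PAGE);

-- ROW compression on an individual index.
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
REBUILD WITH (DATA_COMPRESSION = ROW);
```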
A login is the principal that is used to connect to the SQL Server instance. A user is the principal that is used to connect to a database.
The security context on the instance itself is dictated by the login, its roles and the permissions granted/denied. The security
context on the database is dictated by the user, its roles and the permissions granted/denied.
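A minimal sketch of the login/user pairing (the names, password, and database are placeholders):

```sql
-- Instance-level principal: used to connect to the SQL Server instance.
CREATE LOGIN AppLogin WITH PASSWORD = N'Str0ng!Passw0rd';

-- Database-level principal: maps the login into a specific database.
USE SalesDb;
CREATE USER AppUser FOR LOGIN AppLogin;
```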
Yes, we can run TSQL Statement in a Database Maintenance Plan.
The benefits of using a non-clustered column store index are:
SQL Server 2005 introduced the concept of database schemas and the separation between database objects and ownership by
users. An object owned by a database user is no longer tied to that user. The object now belongs to a schema – a container
that can hold many database objects. A schema is a collection of database objects that are owned by a single principal and form
a single namespace.
Yes, we can run a SQL Server Agent job in a Database Maintenance Plan.
Yes, it can be configured.
DBCC stands for Database Consistency Checker. There are many DBCC commands in SQL Server. We generally use these
commands to check the consistency of databases, i.e., for maintenance, validation tasks, and status checks.
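The most commonly used consistency check, as a sketch:

```sql
-- Check the logical and physical integrity of all objects
-- in the current database, suppressing informational messages.
DBCC CHECKDB WITH NO_INFOMSGS;
```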
Bulk Admin: Members of this role can perform Bulk Insert operations on all the databases.
DB Creator: Members of this role can Create/Alter/Drop/Restore a database.
Disk Admin: Members can manage disk files for the server and all databases. They can handle backup devices.
Process Admin: Members of this role can manage and terminate the processes on the SQL Server.
Server Admin: Members of this role can change Server-wide configurations and shutdown SQL Server instance.
Setup Admin: Members of this role can Add/Remove Linked Servers.
Security Admin: Members of this role can create/manage Logins, including changing and resetting passwords as needed, and managing GRANT, REVOKE and DENY permissions at the server and database levels.
SysAdmin: Members of this role have Full Control on the instance and can perform any task.
Public: Public is another role, similar to the fixed server roles, that is by default granted to every login (Windows/SQL).
The History Cleanup task deletes entries in the following history tables in the SQL Server msdb database.
• backupfile
• backupfilegroup
• backupmediafamily
• backupmediaset
• backupset
• restorefile
• restorefilegroup
• restorehistory
By using the History Cleanup task, a package can delete historical data related to backup and restore activities, SQL Server
Agent jobs, and database maintenance plans.