Reading the transaction log in SQL Server – from hacks to solutions
Published: 2019-05-11

The SQL Server transaction log is akin to a ‘black box’ in an airliner. It contains records of all the transactions made against a database. This information is a proverbial goldmine for database audits, recoveries, etc., but it was never meant to be exposed to end users, let alone visualized in an easy-to-read manner or used for DBA tasks. As such, utilizing this information can be a challenge, to say the least.

To illustrate, let’s open up an LDF file in a hex editor and see what’s inside. As you can see, this is data that was never meant to be processed by humans, at least not directly.

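To get a sense of what a hex editor shows, here is a minimal Python sketch that renders raw bytes in the familiar offset/hex/ASCII layout. The LDF path in the comment is hypothetical; the sketch just illustrates the view, not any SQL Server internals.

```python
def hex_dump(data: bytes, width: int = 16) -> str:
    """Render bytes as offset / hex / ASCII columns, like a hex editor."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08x}  {hex_part:<{width * 3}} {ascii_part}")
    return "\n".join(lines)

if __name__ == "__main__":
    # In practice you would read the raw file (path is hypothetical):
    # with open(r"F:\Data\InventoryDatabase_log.ldf", "rb") as f:
    #     print(hex_dump(f.read(256)))
    print(hex_dump(b"LOP_INSERT_ROWS\x00\x01\x02"))
```

Mostly non-printable bytes come out as dots, which is exactly the wall of noise a hex editor presents for an LDF file.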
Despite the challenges inherent in the data format of LDF and TRN files, there are some solutions that de-mystify this information, un-obfuscating it to a point where it can begin to be useful. We’ll examine a few of these in this article.

Using fn_dblog

fn_dblog is a SQL Server function, albeit undocumented, that can be utilized to decrypt the active region of an online SQL Server transaction log

Let’s take a walk through this solution and examine the results

  1. Run fn_dblog

    SELECT * FROM sys.fn_dblog (NULL, NULL);

    The results include 129 columns by default, so our next step will be to narrow the result set down to something more manageable, like a specific transaction type, e.g. inserts.

  2. Narrow results to match our search

    So let’s drill down into the results to get a more precise set of information

    To see transactions for inserted rows, run:

    SELECT [Current LSN],
           Operation,
           Context,
           [Transaction ID],
           [Begin Time]
    FROM sys.fn_dblog (NULL, NULL)
    WHERE Operation IN ('LOP_INSERT_ROWS');

    Similarly, we can do the same for deleted rows

    SELECT [Begin Time],
           [RowLog Contents 1],
           [Transaction Name],
           Operation
    FROM sys.fn_dblog (NULL, NULL)
    WHERE Operation IN ('LOP_DELETE_ROWS');

  3. Find the appropriate columns that store the values

    To find the columns that store values for inserted or deleted rows, look for these headers ‘RowLog Contents 0’, ‘RowLog Contents 1’, ‘RowLog Contents 2’, ‘RowLog Contents 3’, ‘RowLog Contents 4’, ‘Description’ and ‘Log Record’

    The challenge is that data is scattered across different columns for different operations. This requires the user to map the columns for each transaction type. Since this is an undocumented function, no such map or documentation exists, making reading the proverbial tea leaves that are the results of fn_dblog quite difficult.

    To compound the difficulty further, rows for deleted and inserted data are presented as hexadecimal. Converting this data to meaningful information requires deep knowledge of the format, status bits, total columns etc. Reading Egyptian hieroglyphics might be easier in some cases

  4. Convert the data to usable information

    To complete our process, we must convert the binary data to textual/table data taking into consideration the data type of each column. Conversion mechanisms will vary depending on the column, data type etc

    fn_dblog is a very powerful and useful tool, but it requires an expert-level user to be utilized to its full extent. And even so, it has some quite significant limitations, including:

    1. DDL auditing is complicated and involves reconstructing the state of certain system tables

    2. Only the active part of the online transaction log can be read

    3. There is no means to reconstruct Update DML transactions or BLOBs (see next)

    Update operations in SQL Server are not fully logged in the transaction log. Full before-and-after values, unfortunately, don’t exist; only the delta of the change for that record does. For example, SQL Server may show a change from “H” to “M” when the actual record changed from “House” to “Mouse”. To piece together the full picture, a process must be devised to manually reconstruct the history of changes, including the state of the record prior to the update. This requires painstakingly re-constructing every record from the original insert to the final update, and everything in between.

    BLOBs are another challenge when trying to use fn_dblog to read transaction history. BLOBs, when deleted, are never inserted into the transaction log. So examining the transaction log won’t provide information about its existence unless the original insert can be located. But only by combining these two pieces of data will you be able to recover a deleted BLOB file. This obviously requires that the original insert exists in the active/online portion of the transaction log, the only part accessible to fn_dblog. This may be problematic if the original insert was done some weeks, months or years earlier and the transaction log has been subsequently backed up or truncated

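The binary-to-value conversion in step 4 can be sketched in miniature. The fragment below assumes a deliberately simplified, hypothetical row layout (a 4-byte header, one little-endian INT column, one fixed CHAR(8) column); a real SQL Server row also carries status bits, a column count, a null bitmap and a variable-length section, so treat this purely as an illustration of the decoding idea, not the actual on-disk format.

```python
import struct

def decode_rowlog(row_bytes: bytes) -> dict:
    """Decode a (hypothetical) fixed-layout row image from a 'RowLog Contents' column."""
    # Skip the 4 header bytes (status bits + fixed-length-size field in this sketch).
    int_col, = struct.unpack_from("<i", row_bytes, 4)       # little-endian INT column
    text_col = row_bytes[8:16].decode("ascii").rstrip()     # fixed CHAR(8) column
    return {"id": int_col, "name": text_col}

# A hex string as it might appear in the fn_dblog grid (made-up value):
raw = bytes.fromhex("10001000" + "2a000000" + "4d6f757365202020")
print(decode_rowlog(raw))  # {'id': 42, 'name': 'Mouse'}
```

In practice each operation type scatters its data across different columns and layouts, which is exactly why this mapping work is so tedious without documentation.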

Using fn_dump_dblog

Another solution exists in the form of fn_dump_dblog, which overcomes fn_dblog’s limitation of only being able to read the active/online transaction log.

Fn_dump_dblog is able to read native transaction log backups

Please keep in mind that this function is also undocumented

  1. Run fn_dump_dblog for a specific transaction log backup.

    Please note that you have to specify all parameters, 63 in this case

    SELECT *
    FROM fn_dump_dblog (
        NULL, NULL, N'DISK', 1,
        N'F:\Mybackups\InventoryDatabase_4589725r.trn',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT);

    As with fn_dblog, 129 columns are returned, so returning only the specific columns you need is recommended:

    SELECT [Current LSN],
           Context,
           [Transaction Name],
           Operation,
           [Transaction ID],
           Description
    FROM fn_dump_dblog (
        NULL, NULL, N'DISK', 1,
        N'F:\Mybackups\InventoryDatabase_4589725r.trn',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT);

    Once again we need to dust off our hex editor to decipher this information and we have similar challenges and obstacles with Updates and BLOBs

    But using this function, there is a potentially very helpful and powerful feature available: you are able to restore a database to a specific point in time, just before a particular operation occurred. By doing this, in essence, you could “recover” from a bad/rogue transaction by going back in time and eliminating it.

    To do this you can follow these steps

  2. Determine the LSN for the particular transaction you want to “reverse”

  3. Convert this LSN to the format used in the WITH STOPBEFOREMARK = ‘<mark_name>’ clause. For example, 00000070:00000011:0001 should be transformed into 112000000001700001 (each hex part is converted to decimal, with the second part zero-padded to ten digits and the third to five)

  4. Restore the log, stopping just before that LSN:

    RESTORE LOG InventoryDatabase
    FROM DISK = N'F:\Mybackups\InventoryDatabase_4589725r.trn'
    WITH STOPBEFOREMARK = 'lsn:112000000001700001',
         NORECOVERY;
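The LSN conversion in step 2→3 can be sketched as a small helper. The padding widths are inferred from the worked example above (decimal, with the second part padded to ten digits and the third to five):

```python
def lsn_to_stopmark(lsn: str) -> str:
    """Convert an LSN like '00000070:00000011:0001' to STOPBEFOREMARK decimal form."""
    # Each colon-separated part of the LSN is hexadecimal.
    vlf, block, slot = (int(part, 16) for part in lsn.split(":"))
    # 0x70 -> 112, 0x11 -> 17 (padded to 10 digits), 0x0001 -> 1 (padded to 5).
    return f"{vlf}{block:010d}{slot:05d}"

print(lsn_to_stopmark("00000070:00000011:0001"))  # -> 112000000001700001
```

The result plugs straight into the `WITH STOPBEFOREMARK = 'lsn:...'` clause shown in step 4.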

Using DBCC PAGE

Next on our list is DBCC PAGE, another useful command for reading the SQL Server transaction log, which is able to read only MDF and LDF database files. Like the others, though, it is also undocumented.

DBCC PAGE ( {'dbname' | dbid}, filenum, pagenum [, printopt={0|1|2|3} ])

Use the following syntax to read the first page in our example database’s online transaction log:

SELECT FILE_ID ('InventoryDatabase_Log') AS 'File ID'  -- to determine the log file ID (2 here)

DBCC PAGE (InventoryDatabase, 2, 0, 2)

The results will be as follows

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

By default, the output is not displayed unless you turn on trace flag 3604 first. Once it is set, the output will be displayed:

DBCC TRACEON (3604, -1)

Now re-execute

DBCC PAGE (InventoryDatabase, 2, 0, 2)

The result contains quite a few errors, bad headers and other visual noise. These can be ignored, but the results will still be in hexadecimal format.

Hexadecimal output is really not meant to be interpreted visually by humans, which makes converting this data into meaningful information difficult.

Using ApexSQL Log

ApexSQL Log is a well-known 3rd-party auditing and recovery tool. It can read online and detached transaction logs as well as transaction log backups, including compressed ones.

ApexSQL Log can read all transactions, including DDL and DML, in a chain of transaction log data sources and create scripts to Undo or Redo them. It can even read transactions before the product was installed.

ApexSQL Log, unlike the aforementioned solutions, provides significant value-added capabilities for log reading by processing encrypted and obfuscated log data into easily processed and understood information, including the object, dates and times, and change history with before-and-after values.

To use this tool, follow these steps

  1. Start the ApexSQL Log application
  2. Connect to a database. Normally this is the database whose transaction logs you want to read, but it doesn’t necessarily have to be, as you can add offline and backed-up logs from other databases

  3. Select the logs you want to read in the Select data sources step, ensuring you construct a full log chain. Use the Add button to add other data sources, like detached LDF files and transaction log backups

  4. Select the output type in the next step. To view the data in a grid, select Open results in grid. This will populate the ApexSQL Log visual grid with the contents of the transaction log in an easy-to-read manner and offer lots of value-added processing features like sorting, searching, etc

  5. To refine your results, use the Filter setup to filter by operation type (e.g. DML), object or user name, time ranges and more

  6. Click the Finish button

    Detailed results, but without the visual noise and extraneous data, are presented in an easy-to-understand and easy-to-use manner

    Now you can really decipher the transaction log and unlock its true value. You will be able to see the time the operation began and ended, the operation type, the schema and object name of the object affected, the name of the user who executed the operation, and the computer and application used to execute it. For UPDATEs, you’ll see the old and the new value of the updated fields, aka before-and-after auditing. In addition, you can use ApexSQL Log, for disaster recovery purposes, to easily create Undo scripts that reverse batches of transactions, or Redo scripts to replay them.

    ApexSQL Log can be set up, in continuous auditing mode, to run unattended and continuously write the transaction log back to the database itself, with no gaps in records or duplicated records, so the transaction log data can be queried just like the database itself.

    For database replication, ApexSQL Log can easily be configured to read the transactions from one database and write to another

    To avoid hex values, undocumented functions, unclear column content, long queries, complex action steps, and incomplete UPDATE and BLOB reconstruction when reading SQL Server transaction logs, use ApexSQL Log. It will read transaction logs for you and present the results “in plain English”. Besides that, undo and redo scripts, database replication and continuous auditing are just a click away.
