I have a relatively simple web app set up with an Azure SQL database, and we use Log Analytics. We have "Auditing" turned on for the SQL server, and for some reason I'm seeing 2.8 GB of ingestion a day just from the AzureDiagnostics table (SQLSecurityAuditEvents category). Looking into the logs, I see a long list of RPC_COMPLETED events, which isn't listed among the default audited events according to Azure's documentation. There is an event every 50 ms or so, and the statement is "exec sp_execute 34,3143442" (or something very similar).
I've searched online and can't find anything about what this could be, but it's making our Log Analytics bill pretty expensive. Has anyone else run into this and got to the bottom of it?
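To put a rough number on the cost, here is a minimal sketch. The per-GB rate below is an assumption for illustration (pay-as-you-go Log Analytics ingestion is in the ballpark of a couple of dollars per GB; check current Azure pricing for your region):

```python
def estimate_monthly_ingestion_cost(gb_per_day: float, price_per_gb: float) -> float:
    """Rough monthly Log Analytics ingestion cost, assuming a 30-day month."""
    return round(gb_per_day * 30 * price_per_gb, 2)

# 2.8 GB/day (the volume in the question) at an assumed $2.30/GB:
cost = estimate_monthly_ingestion_cost(2.8, 2.30)
print(cost)  # roughly $193/month just for audit events
```

Even at modest per-GB rates, a steady stream of audited RPC_COMPLETED events adds up quickly.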
Comments:
- gisWeeper (Sep 5, 2022): I'm seeing the same – any ideas?
- nathjc (Sep 6, 2022): Hi @gisWeeper, mine was due to auditing being enabled in Azure. Apparently this logs a huge amount of data and increases the cost a fair bit.
- gisWeeper (Sep 27, 2022): Yeah, that was exactly the same problem for me. Auditing was enabled on the database and was logging a huge amount of data. Analysing that data, I found some poor code: a number of Entity Framework queries looping hundreds of times, which means lots of audit events.
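The loop problem in the last comment generalises: with auditing enabled, every statement the server receives produces an audit record, so a loop that issues one query per entity generates one audit event per iteration. A hypothetical sketch (the `AuditingConnection` class, table, and query below are stand-ins for illustration, not a real driver API):

```python
class AuditingConnection:
    """Stand-in for a SQL connection whose server audits every statement."""

    def __init__(self):
        self.audit_log = []

    def execute(self, sql, params=None):
        # Each statement sent to the server yields one RPC_COMPLETED-style
        # audit event, regardless of how small the statement is.
        self.audit_log.append(sql)


conn = AuditingConnection()
ids = list(range(500))

# Chatty pattern (what an ORM loop can silently do): one statement per id.
for i in ids:
    conn.execute("SELECT * FROM Orders WHERE Id = ?", (i,))
chatty_events = len(conn.audit_log)

# Set-based pattern: one statement covering all ids.
conn.audit_log.clear()
conn.execute("SELECT * FROM Orders WHERE Id IN (...)", tuple(ids))
batched_events = len(conn.audit_log)

print(chatty_events, batched_events)  # 500 vs 1
```

Rewriting the loop as a single set-based query cuts the audit volume by the same factor as the number of iterations.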
Answer:
The thread is old, but I'll add my two cents for future reference.
We had exactly the same problem when using an Azure SQL database from Databricks. We solved it by using an alternative driver instead of the one included in the Databricks runtime. The Spark driver is this one: https://mvnrepository.com/artifact/com.microsoft.azure/spark-mssql-connector
The new driver writes in bulk-insert style, which reduced the audit log volume from gigabytes to megabytes.
So, as a pointer: you might want to try another driver that can work at a more "bulk insert" level, rather than issuing row-by-row statements.
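A minimal write using that connector might look like the following sketch. It assumes a Databricks/Spark session with the spark-mssql-connector library installed on the cluster; the server, database, table, credentials, and batch size are placeholders, so this fragment is not runnable on its own:

```python
# Sketch only: requires a Spark session (`df` is an existing DataFrame)
# and the com.microsoft.azure:spark-mssql-connector library on the cluster.
(df.write
   .format("com.microsoft.sqlserver.jdbc.spark")
   .mode("append")
   .option("url", "jdbc:sqlserver://<server>.database.windows.net;databaseName=<db>")
   .option("dbtable", "dbo.MyTable")
   .option("user", "<user>")
   .option("password", "<password>")
   # Larger batches mean fewer round-trips, and therefore fewer audited
   # statements, than the default row-by-row JDBC behaviour.
   .option("batchsize", "10000")
   .save())
```

Because each batch reaches the server as far fewer statements than a row-by-row insert, the number of audited events, and hence the AzureDiagnostics ingestion, drops accordingly.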