Dynamic Debugging and Instrumentation of Production PL/SQL
“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.” – origin unverifiable
Dynamic Debugging &
Instrumentation of Production PL/SQL
PL/SQL enthusiast. 16 yrs
Andersen Consulting – SF, Denver
New Global Telecom – Golden
Structure Consulting Group – Houston
The Church of Jesus Christ of Latter-day Saints
RMOUG, IOUG and UTOUG. 10 yrs
How did the users/mgmt appreciate your handling of the issue?
Become aware of a problem
Find the source of the problem
Fix the source of the problem
Repair issues the problem caused
Rebuild trust (costliest, longest step)
Do things better so problem doesn’t happen again
There's always another bug.
Test with dirty data and plenty of it.
Get another pair of eyes on it.
Document the code well.
Involve users early and often.
Reduce and Reuse. Don't Repeat.
Options without instrumentation:
Options with instrumentation:
Review what happened
Replicate and monitor in real-time
Proactively analyze and notify
Instrumentation is a big word...
But more familiar than it seems.
The process of fitting production applications with code that directs runtime context to destinations where it can be reviewed and analyzed.
Who, when, what was passed in, what changed, time taken, errors and warnings encountered, etc.
(disabled by default)
(enabled by default)
Direct runtime context to...
log file on the database host
queue for asynchronous re-direction
DBMS pipe or alert
Other slower, more complex alternatives like HTTP, FTP and UDP callouts.
A mouthful that's not fun to type repeatedly...
Nothing until 11g Flashback Data Archive (Total Recall)
Basic AUDIT and FGA seem promising.
But they are not.
But it i$ an extra co$t option...
...and can be too slow to retrieve changes.
returns elapsed time from an arbitrary point in the past known internally to Oracle
result is in hundredths of seconds
divide result by 100 for seconds
get time, do work, get time again; subtract the two readings to get response time
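The pattern above can be sketched as follows (the work step is a placeholder):

```sql
DECLARE
   l_start   PLS_INTEGER;
   l_elapsed NUMBER;
BEGIN
   l_start := DBMS_UTILITY.GET_TIME;  -- hundredths of seconds from an arbitrary epoch
   -- ... do the work being timed ...
   l_elapsed := (DBMS_UTILITY.GET_TIME - l_start) / 100;  -- convert to seconds
   DBMS_OUTPUT.PUT_LINE('Response time: ' || l_elapsed || ' s');
END;
/
```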
requires existence of three PLSQL_PROFILER* tables
quickly identify bottlenecks
better for development and tuning
less awesome for current Production problems
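A typical profiling run, assuming the PLSQL_PROFILER* tables exist (created by Oracle's proftab.sql script) and `my_batch_job` is a hypothetical routine under test:

```sql
BEGIN
   DBMS_PROFILER.START_PROFILER(run_comment => 'tune nightly batch');
   my_batch_job;   -- hypothetical routine being profiled
   DBMS_PROFILER.STOP_PROFILER;
END;
/

-- then look for the hottest lines across units
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
  FROM plsql_profiler_units u
  JOIN plsql_profiler_data  d
    ON d.runid = u.runid AND d.unit_number = u.unit_number
 ORDER BY d.total_time DESC;
```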
Most build custom triggers to capture change, and tables to hold the history.
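A minimal sketch of such a trigger, assuming hypothetical `emp` and `emp_hist` tables:

```sql
CREATE OR REPLACE TRIGGER emp_hist_trg
   AFTER UPDATE OF salary ON emp
   FOR EACH ROW
BEGIN
   -- record old and new column values plus who changed them and when
   INSERT INTO emp_hist (empno, old_salary, new_salary, changed_by, changed_on)
   VALUES (:OLD.empno, :OLD.salary, :NEW.salary, USER, SYSTIMESTAMP);
END;
/
```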
Logging & Debugging
"poor man's" debugger
lazy (and awful) debugger
output goes to buffer
buffer must be retrieved by client tool
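The familiar pattern: output accumulates in a buffer that the client (here SQL*Plus with SERVEROUTPUT on) fetches only after the call completes.

```sql
SET SERVEROUTPUT ON
DECLARE
   l_cnt PLS_INTEGER;
BEGIN
   SELECT COUNT(*) INTO l_cnt FROM user_objects;
   DBMS_OUTPUT.PUT_LINE('object count = ' || l_cnt);  -- buffered, not streamed
END;
/
```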
DBMS_DEBUG & DBMS_DEBUG_JDWP
since Oracle 7 (DBMS_PROBE then)
plumbing for traditional IDE interactive debugging
need DEBUG CONNECT SESSION
JDWP flavor supports "remote" debugging
unfortunately, client has to be coded for this
objects must be compiled with debug info
don't leave this in Production
love it, but not for Production
IDE implementations not stable enough
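For the JDWP flavor, a session attaches itself to a listening IDE debugger roughly like this (host and port here are placeholders; the session needs the DEBUG CONNECT SESSION privilege):

```sql
BEGIN
   -- attach this session to a JDWP debugger listener, e.g. SQL Developer
   DBMS_DEBUG_JDWP.CONNECT_TCP(host => '192.0.2.10', port => '4000');
END;
/
-- detach when finished
BEGIN
   DBMS_DEBUG_JDWP.DISCONNECT;
END;
/
```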
(Rarely, if ever)
"undocumented" but well-known
real-time peeking into active sessions!
Oracle Support: don't use unless instructed
used to create a special DML error logging table
waiting client can’t see the desired event message until after the alerting session commits
resource intensive (client blocks, server needs lock and pipe) PER SESSION!
takes too long to re-register and return to waiting (miss messages)
messages limited to 1800 chars
session metadata, like client_id, can't be shared from alerting session to waiting session
packing messages cumbersome
does not guarantee delivery like AQ
message can never be read more than once
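A sketch of the commit-gated behavior: the waiting session blocks in WAITONE and sees nothing until the signaling session commits.

```sql
-- Session A (waiter)
DECLARE
   l_message VARCHAR2(1800);
   l_status  INTEGER;
BEGIN
   DBMS_ALERT.REGISTER('batch_done');
   DBMS_ALERT.WAITONE('batch_done', l_message, l_status, timeout => 60);
   IF l_status = 0 THEN  -- 0 = alert received, 1 = timed out
      DBMS_OUTPUT.PUT_LINE('got: ' || l_message);
   END IF;
END;
/

-- Session B (signaler): nothing is delivered until the COMMIT
BEGIN
   DBMS_ALERT.SIGNAL('batch_done', 'nightly batch complete');
   COMMIT;
END;
/
```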
write to trace or alert log
need access to DBMS_SYSTEM
each new line interpreted as entirely separate message
can't control message formatting
special characters escaped for HTML
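The routine involved is the undocumented DBMS_SYSTEM.KSDWRT; the second-argument convention commonly cited is 1 = trace file, 2 = alert log, 3 = both. A sketch:

```sql
-- undocumented and unsupported; use with care
BEGIN
   SYS.DBMS_SYSTEM.KSDWRT(2, 'NIGHTLY_RECON: starting run');  -- 2 = alert log
END;
/
```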
set_module() -- 48 bytes
can take module and action as inputs
set_action() -- 32 bytes
set_client_info() -- 64 bytes
track own scripts, DDL operations or DML statements
tags can be too durable: stale, wrong
nested routines with instrumentation overwrite tags of calling routine
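A tagging sketch with DBMS_APPLICATION_INFO; note the explicit clearing at the end, which is what prevents the stale-tag problem above (values are truncated to the byte limits listed):

```sql
BEGIN
   DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'NIGHTLY_RECON',
                                    action_name => 'load stage');
   DBMS_APPLICATION_INFO.SET_CLIENT_INFO('batch 42');  -- illustrative value
   -- ... work: tags are now visible in V$SESSION.module/action/client_info ...
   DBMS_APPLICATION_INFO.SET_MODULE(NULL, NULL);       -- clear tags when done
   DBMS_APPLICATION_INFO.SET_CLIENT_INFO(NULL);
END;
/
```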
Session & DB Metadata
Simple API to clock and record metrics
Should handle nested timers
Simple API to tag sessions and long operations
Should handle nested tagging
Simple API to write files
Simple API for static & dynamic log messages
Standard method of handling exceptions
Auto-gather caller, client & session metadata
Column-level auditing structures and triggers
Off by default, and low overhead, so insightful debug lines can remain in Prod code
Can be turned on and off without Prod interruption
Toggles kept in a table or application context
Turn on for a PL/SQL unit or list of units, session, end user or named process, IP address, domain
dbg('Calling X with '||i_parm);
info('BEGIN: Nightly Reconcile');
warn('X took '||l_diff||' s too long');
startT(); <stuff> stopT(); elapsed();
Choice of Output
Minimally: to table and screen
Optionally: to file
Nice to have: ftp, pipe, AQ, http, etc.
Output must be transaction-independent
puts end user ID into DB (AUDIT, V$SESSION, alert, trace, DBMS_MONITOR, etc.)
who called the routine, their IP address, client machine name, and client OS user name
when they called the routine, how long the subroutines took, and how long the entire call took
what the call accomplished, the number of iterations, the current iteration, the value(s) passed back
parameter values passed into the routine, as well as changing and derived variables determined within
old and new column values if data is changing
path the call took through the control statements
anomalies, warnings and errors detected, etc.
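Much of the who/when metadata above can be gathered automatically from the USERENV context, for example:

```sql
SELECT SYS_CONTEXT('USERENV', 'SESSION_USER') AS db_user,
       SYS_CONTEXT('USERENV', 'OS_USER')      AS client_os_user,
       SYS_CONTEXT('USERENV', 'HOST')         AS client_machine,
       SYS_CONTEXT('USERENV', 'IP_ADDRESS')   AS client_ip,
       SYS_CONTEXT('USERENV', 'MODULE')       AS module
  FROM dual;
```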
"Starter" Framework Too Much?
Thousands of downloads, but not much feedback or developer contributions.
Too many moving parts.
Too complex for shops with one app per schema or one app per DB.
60-page doc and a days-to-weeks learning curve
Security often done in directory server now
Common messages almost never used
Email-from-DB tables rarely used
Locking always needs customization
"Simple" Starter Framework
7 services and 17 objects
Single app schema
Download and Install
Go to http://sourceforge.net
Search for PL/SQL framework.
Select first project (sourdough loaf icon)
Select Browse All Files.
Drill to plsqlfmwksimple/2.1.
Download and unzip Simple.zip
Start SQL*Plus as SYS
If installing to an existing schema, remove the DROP and CREATE USER statements.
err(), warn(), info(), dbg()
startme(), stopme(), elapsed()
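Assuming the routines live in a package named `logs` (the exact package and timer-label names in the framework may differ; this is illustrative), a unit might use them like this:

```sql
BEGIN
   logs.dbg('Entering reconcile');
   logs.startme('reconcile');
   -- ... do the work ...
   logs.stopme('reconcile');
   logs.info('Reconcile took ' || logs.elapsed('reconcile') || ' s');
EXCEPTION
   WHEN OTHERS THEN
      logs.err('Reconcile failed: ' || SQLERRM);
      RAISE;
END;
/
```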
Generates trigger for every table in your schema.
Remove triggers not needed.
Remove auditing on columns not needed.
Audited changes are recorded to APP_CHG_LOG and APP_CHG_LOG_DTL
May need view or materialized view to simplify access to audit data.
log elapsed times using logs.info()
Create separate automated processes to monitor metrics, learn from them over time, and notify when anomalies are detected.
info(msg), warn(msg), err(msg)
record important data, expected and unexpected error conditions
to document code and leave hooks for dynamic, real-time logging
set_dbg (boolean and directed)
Parameter values table-driven
Parameters in APP_PARM
Debug (on/off, session, unit, user)
Debug Toggle Check Interval (in minutes)
Default Log Targets (Screen=N,Table=Y,File=N)
Parameters can be temporarily overridden through logs.set* routines
Monitor & Trace
tag/untag to modify module, action and client_info
tag_longop to track long operations
init/reset_client_ctx to set/unset client_identifier and other client metadata if desired
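Long-operation tagging of this kind maps to DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS; a direct sketch of that underlying call:

```sql
DECLARE
   l_rindex BINARY_INTEGER := DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS_NOHINT;
   l_slno   BINARY_INTEGER;
BEGIN
   FOR i IN 1 .. 1000 LOOP
      -- ... process one chunk of the long operation ...
      DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(
         rindex    => l_rindex,
         slno      => l_slno,
         op_name   => 'Nightly Reconcile',
         sofar     => i,
         totalwork => 1000,
         units     => 'chunks');
   END LOOP;
END;
/
-- progress is then visible in V$SESSION_LONGOPS
```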
Performance of Log Destinations
Screen (10K msgs = 1 sec)
Quick-and-dirty testing and debugging.
Log Table (10K msgs = 4 sec)
A default job keeps the table trimmed to a couple weeks of data.
File (10K msgs = 15 sec)
Pipe (10K msgs = 8 sec + 4 sec to log them)
Putting it All Together
Write and document public interface.
Write tests that all fail.
Write body in pseudo-code.
Fill in the algorithm, making sure routine does one thing and one thing well. Ensure it uses assertions to check assumptions. Clean. To standard. Formatted.
Wrap pseudo-code with log and debug calls, adding a little runtime context. Voilà! Three birds with one stone.
Then I run the tests until they all work, using the instrumentation and metrics if there is trouble.
Instrumentation should be in place before production problems occur.
But it can be added easily after as well.
Adopt or build a standard library.
It must be simple and easy to use.
Encourage or enforce its use.
Do it today! It’s easy and rewarding.
Papers & Presentations:
Could be kept and immediately changed in a global application context, but this is not supported in a RAC environment until 11gR2.
Strictly DBA? Strictly developers? Hybrids?
Written PL/SQL that was released to Prod?
Who has never had anything go wrong in that production PL/SQL?
When things do go wrong, how long does it take to diagnose the problem?
Becoming Aware there is a Problem
Proactively monitor and analyze:
Finding the Problem Source