module SemanticLogger

def self.[](klass)

Return a logger for the supplied class or class_name
def self.[](klass)
  Logger.new(klass)
end
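
A quick usage sketch; MyService below is an illustrative class name, and a plain String name works as well:

  logger = SemanticLogger[MyService]
  logger.info('Service started')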

def self.add_appender(**args, &block)

logger.debug("Login time", user: 'Joe', duration: 100, ip_address: '127.0.0.1')
logger.info "Hello World"
logger = SemanticLogger['Example']

SemanticLogger.add_appender(logger: log)
SemanticLogger.default_level = :debug

log.level = Logger::DEBUG
log = Logger.new($stdout)
# Built-in Ruby logger

require 'semantic_logger'
require 'logger'
# Send Semantic logging output to an existing logger

Log to log4r, Logger, etc.:

SemanticLogger.add_appender(io: $stdout, level: :info)
SemanticLogger.add_appender(file_name: 'logfile.log')
# Send all logging output to a file and only :info and above to standard output

SemanticLogger.add_appender(file_name: 'logfile.log')
# Send all logging output to a file

SemanticLogger.add_appender(io: $stdout)
# Send all logging output to Standard Out (Screen)

Examples:

The Proc must return true or false.
Proc: Only include log messages where the supplied Proc returns true
regular expression. All other messages will be ignored.
RegExp: Only include log messages where the class name matches the supplied.
filter: [Regexp|Proc]

Default: :default
A Proc to be used to format the output from this appender
Or,
An instance of a class that implements #call
Or,
Any of the following symbol values: :default, :color, :json, :logfmt, etc...
formatter: [Symbol|Object|Proc]

Default: SemanticLogger.default_level
Override the log level for this appender.
level: [:trace | :debug | :info | :warn | :error | :fatal]

An instance of a Logger or a Log4r logger.
logger: [Logger|Log4r]
Or,

SemanticLogger::Appender::Http.new(url: 'http://localhost:8088/path')
For example:
An instance of an appender derived from SemanticLogger::Subscriber
Or,
:bugsnag, :elasticsearch, :graylog, :http, :mongodb, :new_relic, :splunk_http, :syslog, :wrapper
For example:
A symbol identifying the appender to create.
appender: [Symbol|SemanticLogger::Subscriber]
Or,

For example $stdout, $stderr, etc.
An IO Stream to log to.
io: [IO]
Or,

File name to write log messages to.
file_name: [String]
Parameters

more information on custom formatters
of the messages sent to that appender. See SemanticLogger::Logger.new for
If a block is supplied then it will be used to customize the format

Appenders will be written to in the order that they are added

emitted from Semantic Logger
Add a new logging appender as a new destination for all log messages
def self.add_appender(**args, &block)
  appender = appenders.add(**args, &block)
  # Start appender thread if it is not already running
  Logger.processor.start
  appender
end
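
A sketch of the block form described above, assuming the block is invoked as the formatter with the log event (which exposes time, level, name, and message) and returns the formatted String:

  SemanticLogger.add_appender(io: $stdout) do |log|
    # Assumed formatter contract: receive the log event, return a String
    "#{log.time} #{log.level.to_s.upcase} #{log.name} -- #{log.message}"
  end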

def self.add_signal_handler(log_level_signal = "USR2", thread_dump_signal = "TTIN", gc_log_microseconds = 100_000)

Add signal handlers for Semantic Logger.

Two signal handlers will be registered by default:

1. Changing the log_level:

   The log level can be changed without restarting the process by sending the
   log_level_signal, which by default is 'USR2'.

   When the log_level_signal is raised on this process, the global default log level
   rotates through the following log levels in the following order, starting
   from the current global default level:
     :fatal, :error, :warn, :info, :debug, :trace

   If the current level is :trace it wraps around back to :fatal.

2. Logging a Ruby thread dump

   When the signal is raised on this process, Semantic Logger will write the list
   of threads to the log file, along with their back-traces when available.

   For JRuby users this thread dump differs from the standard QUIT-triggered
   Java thread dump which includes system threads and Java stack traces.

   It is recommended to name any threads you create in the application, by
   calling the following from within the thread itself:
     Thread.current.name = 'My Worker'

Also adds JRuby Garbage collection logging so that any garbage collections
that exceed the time threshold will be logged. Default: 100 ms
Currently only supported when running JRuby.

Note:
  To only register one of the signal handlers, set the other to nil.
  Set gc_log_microseconds to nil to disable JRuby Garbage collection logging.
def self.add_signal_handler(log_level_signal = "USR2", thread_dump_signal = "TTIN", gc_log_microseconds = 100_000)
  if log_level_signal
    Signal.trap(log_level_signal) do
      current_level_index = LEVELS.find_index(default_level)
      new_level_index = current_level_index == 0 ? LEVELS.size - 1 : current_level_index - 1
      new_level = LEVELS[new_level_index]
      self.default_level = new_level
      self["SemanticLogger"].warn "Changed global default log level to #{new_level.inspect}"
    end
  end
  if thread_dump_signal
    Signal.trap(thread_dump_signal) do
      logger = SemanticLogger["Thread Dump"]
      Thread.list.each do |thread|
        # MRI re-uses the main thread for signals, JRuby uses `SIGTTIN handler` thread.
        next if defined?(JRuby) && (thread == Thread.current)
        logger.backtrace(thread: thread)
      end
    end
  end
  if gc_log_microseconds && defined?(JRuby)
    listener = SemanticLogger::JRuby::GarbageCollectionLogger.new(gc_log_microseconds)
    Java::JavaLangManagement::ManagementFactory.getGarbageCollectorMXBeans.each do |gcbean|
      gcbean.add_notification_listener(listener, nil, nil)
    end
  end
  true
end
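
A usage sketch; the signal names are the defaults above and the pid is illustrative:

  SemanticLogger.add_signal_handler   # registers the USR2 and TTIN handlers
  # Then, from a shell on the same host:
  #   kill -USR2 <pid>   # cycle the global default log level
  #   kill -TTIN <pid>   # log a thread dump for every thread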

def self.appenders

Returns [SemanticLogger::Subscriber] a copy of the list of active
appenders for debugging etc.
Use SemanticLogger.add_appender and SemanticLogger.remove_appender
to manipulate the active appenders list.
def self.appenders
  Logger.processor.appenders
end

def self.application

Returns [String] name of this application for logging purposes.
Note: Not all appenders use `application`
def self.application
  @application
end

def self.application=(application)

Override the default application
def self.application=(application)
  @application = application
end

def self.backtrace_level

Returns the current backtrace level
def self.backtrace_level
  @backtrace_level
end

def self.backtrace_level=(level)

Sets the level at which backtraces should be captured
for every log message.

By enabling backtrace capture the filename and line number of where the
message was logged can be written to the log file. Additionally, the backtrace
can be forwarded to error management services such as Bugsnag.

Warning:
  Capturing backtraces is very expensive and should not be done all
  the time. It is recommended to run it at :error level in production.
def self.backtrace_level=(level)
  @backtrace_level = level
  # For performance reasons pre-calculate the level index
  @backtrace_level_index = level.nil? ? 65_535 : Levels.index(level)
end
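
A configuration sketch reflecting the recommendation above:

  # Capture the file name, line number, and backtrace only for :error and above
  SemanticLogger.backtrace_level = :error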

def self.backtrace_level_index

Returns the current backtrace level index.
For internal use only.
def self.backtrace_level_index
  @backtrace_level_index
end

def self.clear_appenders!

Clear out all previously registered appenders
def self.clear_appenders!
  Logger.processor.close
end

def self.close

Close all appenders and flush any outstanding messages.
def self.close
  Logger.processor.close
end

def self.default_level

Returns the global default log level
def self.default_level
  @default_level
end

def self.default_level=(level)

Sets the global default log level
def self.default_level=(level)
  @default_level = level
  # For performance reasons pre-calculate the level index
  @default_level_index = Levels.index(level)
end

def self.default_level_index

def self.default_level_index
  Thread.current[:semantic_logger_silence] || @default_level_index
end

def self.environment

Returns [String] name of this environment for logging purposes.
Note: Not all appenders use `environment`
def self.environment
  @environment
end

def self.environment=(environment)

Override the default environment
def self.environment=(environment)
  @environment = environment
end

def self.fast_tag(tag)

If the tag being supplied is definitely a String, then this fast
tag api can be used for short-lived tags.
def self.fast_tag(tag)
  return yield if tag.nil? || tag == ""
  t = Thread.current[:semantic_logger_tags] ||= []
  begin
    t << tag
    yield
  ensure
    t.pop
  end
end
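
A usage sketch; `logger` stands for any Semantic Logger logger and the tag value is illustrative:

  SemanticLogger.fast_tag('sidekiq') do
    logger.info('Job started')   # logged with the 'sidekiq' tag
  end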

def self.flush

Flush all queued log entries to disk, database, etc.
All queued log messages are written and then each appender is flushed in turn.
def self.flush
  Logger.processor.flush
end

def self.host

Returns [String] name of this host for logging purposes.
Note: Not all appenders use `host`
def self.host
  @host ||= Socket.gethostname.force_encoding("UTF-8")
end

def self.host=(host)

Override the default host name
def self.host=(host)
  @host = host
end
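
The host, application, and environment fields are typically set once during initialization; a sketch with illustrative values:

  SemanticLogger.application = 'billing-service'
  SemanticLogger.environment = ENV['APP_ENV'] || 'development'
  SemanticLogger.host        = 'worker-01'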

def self.lag_check_interval

Returns the check_interval which is the number of messages between checks
to determine if the appender thread is falling behind.
def self.lag_check_interval
  Logger.processor.lag_check_interval
end

def self.lag_check_interval=(lag_check_interval)

Set the check_interval which is the number of messages between checks
to determine if the appender thread is falling behind.
def self.lag_check_interval=(lag_check_interval)
  Logger.processor.lag_check_interval = lag_check_interval
end

def self.lag_threshold_s

Returns the amount of time in seconds used to determine
if the appender thread is falling behind.
def self.lag_threshold_s
  Logger.processor.lag_threshold_s
end

def self.named_tagged(hash)

:nodoc
def self.named_tagged(hash)
  return yield if hash.nil? || hash.empty?
  raise(ArgumentError, "#named_tagged only accepts named parameters (Hash)") unless hash.is_a?(Hash)
  begin
    push_named_tags(hash)
    yield
  ensure
    pop_named_tags
  end
end

def self.named_tags

Returns [Hash] a copy of the named tags currently active for this thread.
def self.named_tags
  if (list = Thread.current[:semantic_logger_named_tags]) && !list.empty?
    if list.size > 1
      list.reduce({}) { |sum, h| sum.merge(h) }
    else
      list.first.clone
    end
  else
    {}
  end
end

def self.on_log(object = nil, &block)

Supply a callback to be called whenever a log entry is created.
Useful for capturing appender-specific context information.

Parameters
  object: [Object | Proc]
    [Proc] the block to call.
    [Object] any object on which to call #call.

Example:
  SemanticLogger.on_log do |log|
    log.set_context(:honeybadger, Honeybadger.get_context)
  end

Example:
  module CaptureContext
    def self.call(log)
      log.set_context(:honeybadger, Honeybadger.get_context)
    end
  end

  SemanticLogger.on_log(CaptureContext)

Note:
* This callback is called within the thread of the application making the logging call.
* If these callbacks are slow they will slow down the application.
def self.on_log(object = nil, &block)
  Logger.subscribe(object, &block)
end

def self.pop_named_tags(quantity = 1)

def self.pop_named_tags(quantity = 1)
  t = Thread.current[:semantic_logger_named_tags]
  t&.pop(quantity)
end

def self.pop_tags(quantity = 1)

Remove specified number of tags from the current tag list
def self.pop_tags(quantity = 1)
  t = Thread.current[:semantic_logger_tags]
  t&.pop(quantity)
end

def self.push_named_tags(hash)

def self.push_named_tags(hash)
  (Thread.current[:semantic_logger_named_tags] ||= []) << hash
  hash
end

def self.push_tags(*tags)

Add tags to the current scope.

Note:
- This method does not flatten the array, remove empty elements, or remove
  duplicates, since the performance penalty is excessive.
- To get the flattening behavior use the slower api:
  `logger.push_tags`
def self.push_tags(*tags)
  (Thread.current[:semantic_logger_tags] ||= []).concat(tags)
  tags
end
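
A sketch of pairing push_tags with pop_tags (documented above); `logger` stands for any Semantic Logger logger:

  tags = SemanticLogger.push_tags('api', 'v2')
  begin
    logger.info('Request received')   # logged with tags ["api", "v2"]
  ensure
    SemanticLogger.pop_tags(tags.size)
  end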

def self.queue_size

Returns [Integer] the number of log entries waiting to be written to the appenders.

When this number grows it is because the logging appender thread is not
able to write to the appenders fast enough. Either reduce the amount of
logging, increase the log level, reduce the number of appenders, or
look into speeding up the appenders themselves.
def self.queue_size
  Logger.processor.queue.size
end
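
A monitoring sketch; the threshold and logger name are illustrative:

  if SemanticLogger.queue_size > 1_000
    SemanticLogger['Monitor'].warn('Log appender thread is lagging', queue_size: SemanticLogger.queue_size)
  end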

def self.remove_appender(appender)

Remove an existing appender.
Currently only supports appender instances.
def self.remove_appender(appender)
  return unless appender
  appenders.delete(appender)
  appender.close
end
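
A usage sketch: keep the appender instance returned by add_appender so it can be removed later:

  appender = SemanticLogger.add_appender(io: $stdout)
  # ... later, stop writing to standard out
  SemanticLogger.remove_appender(appender)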

def self.reopen

After forking an active process call SemanticLogger.reopen to re-open
any open file handles and other resources.

Note:
  Not all appenders implement reopen.
  Check the code for each appender you are using before relying on this behavior.
def self.reopen
  Logger.processor.reopen
end
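
A sketch of re-opening appenders in a forked child process:

  pid = fork do
    SemanticLogger.reopen
    # ... work that logs from the child process ...
  end
  Process.wait(pid)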

def self.silence(new_level = :error)

Silence noisy log levels by changing the default_level within the block.

This setting is thread-safe and only applies to the current thread.

Any threads spawned within the block will not be affected by this setting.

#silence can be used to both raise and lower the log level within
the supplied block.

Example:

  # Perform trace level logging within the block when the default is higher
  SemanticLogger.default_level = :info

  logger.debug 'this will _not_ be logged'

  SemanticLogger.silence(:trace) do
    logger.debug "this will be logged"
  end

Parameters
  new_level
    The new log level to apply within the block
    Default: :error

Example:
  # Silence all logging for this thread below :error level
  SemanticLogger.silence do
    logger.info "this will _not_ be logged"
    logger.warn "this neither"
    logger.error "but errors will be logged"
  end

Note:
  #silence does not affect any loggers which have had their log level set
  explicitly, i.e. that do not rely on the global default level.
def self.silence(new_level = :error)
  current_index                            = Thread.current[:semantic_logger_silence]
  Thread.current[:semantic_logger_silence] = Levels.index(new_level)
  yield
ensure
  Thread.current[:semantic_logger_silence] = current_index
end

def self.sync!

Run Semantic Logger in Synchronous mode.

I.e. Instead of logging messages in a separate thread for better performance,
log them using the current thread.
def self.sync!
  Logger.sync!
end
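
A sketch of enabling synchronous mode early in an initializer (calling it before appenders are added is an assumption here, not a documented requirement):

  SemanticLogger.sync!
  SemanticLogger.add_appender(io: $stdout)
  SemanticLogger['Example'].info('Logged from the calling thread')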

def self.sync?

Running in synchronous mode?
def self.sync?
  Logger.sync?
end

def self.tagged(*tags, &block)

Add the tags or named tags to the list of tags to log for this thread whilst the supplied block is active.

Returns result of block.

Tagged example:
  SemanticLogger.tagged(12345, 'jack') do
    logger.debug('Hello World')
  end

Named Tags (Hash) example:
  SemanticLogger.tagged(tracking_number: 12345) do
    logger.debug('Hello World')
  end

Notes:
- Tags should be a list without any empty values and should not contain any arrays.
- `logger.tagged` is a slower api that will flatten the example below:
  `logger.tagged([['first', nil], nil, ['more'], 'other'])`
  to the equivalent of:
  `logger.tagged('first', 'more', 'other')`
def self.tagged(*tags, &block)
  return yield if tags.empty?
  # Allow named tags to be passed into the logger
  if tags.size == 1
    tag = tags[0]
    return tag.is_a?(Hash) ? named_tagged(tag, &block) : fast_tag(tag, &block)
  end
  begin
    push_tags(*tags)
    yield
  ensure
    pop_tags(tags.size)
  end
end

def self.tags

Returns a copy of the [Array] of [String] tags currently active for this thread.
Returns an empty Array if no tags are set.
def self.tags
  # Since tags are stored on a per thread basis this list is thread-safe
  t = Thread.current[:semantic_logger_tags]
  t.nil? ? [] : t.clone
end