class SemanticLogger::Appender::MongoDB

The Mongo Appender for the SemanticLogger

Mongo Document Schema:

  _id: ObjectId("4d9cbcbf7abb3abdaf9679cd"),
  time: ISODate("2011-04-06T19:19:27.006Z"),
  host: 'Name of the host on which this log entry originated',
  application: 'Name of application or service logging the data - clarity_base, nginx, tomcat',
  pid: process id,
  thread: "name or id of thread",
  name: "com.clarity.MyClass",
  level: 'trace|debug|info|warn|error|fatal',
  level_index: 0|1|2|3|4|5,
  message: "Message supplied to the logging call",
  duration: 'human readable duration',
  duration_ms: ms,
  tags: ["id1", "id2"],
  exception: {
    name: 'MyException',
    message: 'Invalid value',
    stack_trace: []
  },
  # When a backtrace is captured
  file_name: 'my_class.rb',
  line_number: 42

Example:

  require 'semantic_logger'

  appender = SemanticLogger::Appender::MongoDB.new(
    uri:             'mongodb://127.0.0.1:27017/test',
    collection_size: 1024**3 # 1.gigabyte
  )
  SemanticLogger.add_appender(appender: appender)

  logger = SemanticLogger['Example']

  # Log some messages
  logger.info 'This message is written to mongo as a document'
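Since each log entry is stored as a plain document, the log data can be queried
with the standard Ruby Mongo driver. A minimal sketch, assuming the mongo 2.x
driver and the default collection name of semantic_logger (the database name is
illustrative):

  require 'mongo'

  client = Mongo::Client.new('mongodb://127.0.0.1:27017/test')

  # Fetch error and fatal entries logged by the 'Example' class,
  # using the schema fields documented above
  client[:semantic_logger]
    .find(name: 'Example', level: {'$in' => %w[error fatal]})
    .each { |doc| puts "#{doc['time']} #{doc['level']}: #{doc['message']}" }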

def create_indexes

Create the required capped collection.

Features of a capped collection:
* No indexes by default (not even on _id)
* Documents cannot be deleted
* Document updates cannot make them any larger
* Documents are always stored in insertion order
* A find will always return the documents in their insertion order

Also creates an index on tags to support faster searches.
def create_indexes
  # Create Capped collection
  begin
    @collection.create
  rescue Mongo::Error::OperationFailure
    nil
  end
  @collection.indexes.create_one(tags: 1)
end
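The tags index created above allows lookups by tag without a full collection
scan. A short sketch, assuming the mongo 2.x driver and the default collection
name (matching a scalar against the tags array finds any document whose array
contains that tag):

  require 'mongo'

  client = Mongo::Client.new('mongodb://127.0.0.1:27017/test')
  client[:semantic_logger].find(tags: 'id1').each { |doc| puts doc['message'] }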

def default_formatter

def default_formatter
  SemanticLogger::Formatters::Raw.new
end
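SemanticLogger::Formatters::Raw renders each log event as a Hash, which is the
form the Mongo driver expects for insert_one. A different formatter can be
supplied via the formatter: parameter documented under #initialize; a minimal
sketch using a Proc (the choice of fields here is purely illustrative):

  appender = SemanticLogger::Appender::MongoDB.new(
    uri:       'mongodb://127.0.0.1:27017/test',
    # Called as formatter.call(log, appender); must return the document Hash
    formatter: ->(log, _appender) { {time: log.time, level: log.level, message: log.message} }
  )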

def initialize(uri:,

Create a MongoDB Appender instance

Parameters:

  uri: [String]
    Mongo connection string.
    Example:
      mongodb://127.0.0.1:27017/test

  collection_name: [String]
    Name of the collection to store log data in.
    Default: semantic_logger

  write_concern: [Integer]
    Write concern to use.
    See: http://docs.mongodb.org/manual/reference/write-concern/
    Default: 0

  collection_size: [Integer]
    The size of the MongoDB capped collection to create, in bytes.
    Default: 1 GB
    Examples:
      Prod: 25GB (.5GB per day across 4 servers over 10 days)
      Dev: .5GB
      Test: File
      Release: 4GB

  collection_max: [Integer]
    Maximum number of log entries that the capped collection will hold.
    Default: no max limit

  level: [:trace | :debug | :info | :warn | :error | :fatal]
    Override the log level for this appender.
    Default: SemanticLogger.default_level

  formatter: [Object|Proc|Symbol]
    An instance of a class that implements #call, or a Proc to be used to
    format the output from this appender.
    Default: Use the built-in formatter (See: #call)

  filter: [Regexp|Proc]
    RegExp: Only include log messages where the class name matches the
    supplied regular expression. All other messages will be ignored.
    Proc: Only include log messages where the supplied Proc returns true.
    The Proc must return true or false.

  host: [String]
    Name of this host to appear in log messages.
    Default: SemanticLogger.host

  application: [String]
    Name of this application to appear in log messages.
    Default: SemanticLogger.application
def initialize(uri:,
               collection_name: "semantic_logger",
               write_concern: 0,
               collection_size: 1024**3,
               collection_max: nil,
               **args,
               &block)
  @client          = Mongo::Client.new(uri, logger: logger)
  @collection_name = collection_name
  @options         = {
    capped: true,
    size:   collection_size,
    write:  {w: write_concern}
  }
  @options[:max] = collection_max if collection_max
  reopen
  # Create the collection and necessary indexes
  create_indexes
  super(**args, &block)
end
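A sketch of creating an appender that caps the collection at both 512MB and
100,000 entries and only stores :warn and above (the URI and collection name
are illustrative):

  appender = SemanticLogger::Appender::MongoDB.new(
    uri:             'mongodb://localhost:27017/production_logs',
    collection_name: 'audit_log',
    collection_size: 512 * 1024**2, # 512MB
    collection_max:  100_000,
    level:           :warn
  )
  SemanticLogger.add_appender(appender: appender)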

def log(log)

Log the message to MongoDB
def log(log)
  # Insert log entry into Mongo
  collection.insert_one(formatter.call(log, self))
  true
end

def purge_all

Purge all data from the capped collection by dropping the collection
and recreating it.
Also useful when the size of the capped collection needs to be changed.
def purge_all
  collection.drop
  reopen
  create_indexes
end
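Since an existing capped collection keeps its original options, one way to
resize it is to build an appender with the new collection_size and then call
#purge_all so the collection is dropped and recreated. A sketch; note this
discards all stored log entries:

  appender = SemanticLogger::Appender::MongoDB.new(
    uri:             'mongodb://127.0.0.1:27017/test',
    collection_size: 256 * 1024**2 # shrink the capped collection to 256MB
  )
  appender.purge_all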

def reopen

After forking an active process, call #reopen to re-open the handles to
resources.
def reopen
  @collection = client[@collection_name, @options]
end
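For example, in a pre-forking web server each child process should re-open its
Mongo connection handle. A sketch using Puma's on_worker_boot hook and the
gem's top-level SemanticLogger.reopen, which re-opens all registered appenders:

  # config/puma.rb
  on_worker_boot do
    SemanticLogger.reopen
  end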