Simple Thread-safe Logger in C++
A very simple, small, thread-safe and stream-styled logger written in C++.
A thread-safe logger is crucial for the correct functioning and maintainability of multi-threaded applications. It ensures data integrity, prevents race conditions, and provides reliable log records for debugging and troubleshooting. Without thread safety, multiple threads may attempt to log messages simultaneously, which can lead to interleaved messages and data corruption.
Thread Safety
std::atomic<T> provides an atomic type that different threads can operate on simultaneously without causing undefined behavior. It also gives you finer control by allowing various memory orders that specify synchronization and ordering constraints.
Simple multi-threaded data access
In a multi-threaded scenario, we start with two shared variables int A = 0, B = 0;. Each thread writes one of them and then prints the other.
In thread-1:
A = 1;
print(B);
In thread-2:
B = 2;
print(A);
Execute in sequence
The simplest case is that the two threads execute in sequence; that is, after one thread completes execution, the instructions of the other thread are executed. In this case, there are two possibilities:
A = 1;
print(B);
B = 2;
print(A);
The output will be: 01
B = 2;
print(A);
A = 1;
print(B);
The output will be: 02
Execute alternately
A = 1;
B = 2;
print(A);
print(B);
In this case, it will print 12
A = 1;
B = 2;
print(B);
print(A);
The output will be 21
print 00
In addition to the above cases, there is one more conceivable output: 00. Under the intuitive model of execution this output cannot occur, as the happens-before analysis below shows.
Happen-before Rule
The happens-before rule is a set of rules that define the order and visibility of actions in a program, especially in multi-threaded applications. This happens-before relationship ensures that there is a consistent order among operations. If operation A happens-before B, then the memory effects of A effectively become visible to the thread performing B before B is performed.
If we want an output of “00”, the operation print(A); must happen-before A = 1;. Similarly, the operation print(B); must happen-before B = 2;.
However, within each thread the order of execution must follow program order: A = 1; precedes print(B); in thread-1, and B = 2; precedes print(A); in thread-2. Chaining these constraints produces a cycle, so it is impossible to print “00” under this model.
C++ Memory ordering
Memory order can be specified using the following enumeration:
namespace std{
    typedef enum memory_order{
        memory_order_relaxed,
        memory_order_consume,
        memory_order_acquire,
        memory_order_release,
        memory_order_acq_rel,
        memory_order_seq_cst
    } memory_order;
}
The default for atomic variables is memory_order_seq_cst.
- memory_order_seq_cst: A load operation with this memory order performs an acquire operation, a store performs a release operation, and a read-modify-write performs both an acquire and a release operation; in addition, a single total order exists in which all threads observe all modifications in the same order.
- memory_order_acq_rel: A read-modify-write operation with this memory order is both an acquire operation and a release operation. No memory reads or writes in the current thread can be reordered before the load, nor after the store. All writes in other threads that release the same atomic variable are visible before the modification, and the modification is visible in other threads that acquire the same atomic variable.
- memory_order_relaxed: No synchronization or ordering constraints are imposed on other reads or writes; only the operation's own atomicity is guaranteed.
- memory_order_consume: A load operation with this memory order performs a consume operation on the affected memory location: no reads or writes in the current thread that depend on the value currently loaded can be reordered before this load. Writes to data-dependent variables in other threads that release the same atomic variable are visible in the current thread. On most platforms, this affects compiler optimizations only.
- memory_order_acquire: A load operation with this memory order performs the acquire operation on the affected memory location: no reads or writes in the current thread can be reordered before this load. All writes in other threads that release the same atomic variable are visible in the current thread.
- memory_order_release: A store operation with this memory order performs the release operation: no reads or writes in the current thread can be reordered after this store. All writes in the current thread are visible in other threads that acquire the same atomic variable (see the release/acquire sketch below), and writes that carry a dependency into the atomic variable become visible in other threads that consume the same atomic.
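A minimal sketch (not from the original post) of the most common pairing, release/acquire: a consumer that observes the release store is guaranteed to also see every write made before it.

#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                // ordinary, non-atomic data
std::atomic<bool> ready{false}; // synchronization flag

void producer(){
    payload = 42;                                 // (1) plain write
    ready.store(true, std::memory_order_release); // (2) release store
}

void consumer(){
    while(!ready.load(std::memory_order_acquire)); // (3) acquire load, spins until true
    assert(payload == 42);                         // (4) guaranteed to see the write from (1)
}

int main(){
    std::thread t1(producer), t2(consumer);
    t1.join(); t2.join();
}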
Sequential Consistency
Sequential Consistency (SC) is the simplest memory model.
“… the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program.” – Leslie Lamport, “How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs”, IEEE Trans. Comput. C-28,9 (Sept. 1979), 690-691.
In summary, it follows two rules:
- The execution order of each processor is the same as the program order.
- All processors observe a single, common order of execution.
When a write buffer (store buffer) is added to each core, some situations that are impossible under the SC model become possible. This weaker model is called Total Store Ordering (TSO).
We still use the example mentioned above:
In thread-1:
A = 1;
print(B);
In thread-2:
B = 2;
print(A);
This time, each thread writes the new value of A or B into its store buffer and returns immediately; the value has not yet reached memory. When print() is called, each processor may still read the original value of the other variable, so the output can be 00.
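The same store-buffering experiment can be written with C++ atomics. In this sketch (not from the original post), relaxed ordering permits the "00" outcome, whereas memory_order_seq_cst would forbid it:

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> A{0}, B{0};
int r1, r2;

void thread1(){
    A.store(1, std::memory_order_relaxed);
    r1 = B.load(std::memory_order_relaxed); // may read 0
}
void thread2(){
    B.store(2, std::memory_order_relaxed);
    r2 = A.load(std::memory_order_relaxed); // may read 0
}

int main(){
    std::thread t1(thread1), t2(thread2);
    t1.join(); t2.join();
    std::printf("%d%d\n", r1, r2); // "00" is a permitted outcome with relaxed ordering
}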
Relaxed Memory Models
Neither of the above two memory models (SC and TSO) changes the order of execution within a single thread. The relaxed memory models discussed here, however, do change that order.
In relaxed memory models, instructions can be reordered by the compiler as long as the result observed by the single thread is preserved.
For a simple example, we use gcc to generate the assembly code of the following code:
int A, B;

void func(){
    A = B + 1;
    B = 0;
}

int main(){
    func();
    return 0;
}
gcc test.cpp -S
The generated test.s will look like:
.file "test.cpp"
.text
.globl A
.bss
.align 4
.type A, @object
.size A, 4
A:
.zero 4
.globl B
.align 4
.type B, @object
.size B, 4
B:
.zero 4
.text
.globl _Z4funcv
.type _Z4funcv, @function
_Z4funcv:
.LFB0:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
movl B(%rip), %eax
addl $1, %eax
movl %eax, A(%rip)
movl $0, B(%rip)
nop
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE0:
.size _Z4funcv, .-_Z4funcv
.globl main
.type main, @function
main:
.LFB1:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
call _Z4funcv
movl $0, %eax
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
.LFE1:
.size main, .-main
.ident "GCC: (Debian 14.2.0-8) 14.2.0"
.section .note.GNU-stack,"",@progbits
Let’s make it simple and focus only on the following instructions:
movl B(%rip), %eax
addl $1, %eax
movl %eax, A(%rip)
movl $0, B(%rip)
In these four instructions, B is first loaded into register eax, then eax + 1 is stored to A, and finally B is set to 0.
If we add -O2 to optimize (gcc test.cpp -S -O2), the assembly code will be different:
movl B(%rip), %eax
movl $0, B(%rip)
addl $1, %eax
movl %eax, A(%rip)
After B is loaded into eax, B is immediately set to 0, before the store to A. This shows that as long as the old value of B is temporarily held in a register, the movl stores can be reordered without changing the result observed by this single thread.
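This reordering is exactly what atomic memory orders constrain. As an illustration (not part of the original example), if B were a std::atomic<int> and the final assignment used a release store, the compiler (and the CPU) would no longer be allowed to move the plain store to A past it:

#include <atomic>

int A;
std::atomic<int> B;

void func(){
    A = B.load(std::memory_order_relaxed) + 1;
    B.store(0, std::memory_order_release); // release: the store to A above may not be reordered after this line
}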
std::atomic
In multi-threaded programming, synchronization problems between threads are unavoidable. Traditional synchronization primitives, such as mutexes and condition variables, may lead to performance degradation and deadlocks. C++11 introduces atomic operations, providing a more efficient and safer approach to multi-threaded programming. This section introduces the concept and usage of atomic operations in C++ with examples.
std::atomic_flag
std::atomic_flag is the simplest atomic type, with only two states: set and clear. std::atomic_flag cannot be copied or assigned, and (before C++20) must be initialized with the ATOMIC_FLAG_INIT macro.
#include <iostream>
#include <atomic>
using namespace std;

int main(){
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
    cout << flag.test_and_set() << endl; // prints 0: the flag was clear and is now set
    cout << flag.test_and_set() << endl; // prints 1: the flag was already set
    cout << endl;
    flag.clear();                        // back to the clear state
    return 0;
}
Think of flag as a boolean. Through this example, we can see that test_and_set() actually performs two actions: it first tests the flag, i.e., returns its current state, and then sets it to true. These two actions cannot be interrupted in between. clear() sets flag back to false.
std::atomic
std::atomic<T> is a general atomic type that can be used with any trivially copyable type T. An std::atomic object itself cannot be copied or moved, but it can be initialized with, and assigned, a value of type T.
std::atomic<int> atomicInt(0);
std::atomic<bool> atomicBool(true);
atomic operations
Basic atomic operations include load(), store(), exchange(), etc. load() reads the value of an atomic variable, store() sets the value, and exchange() atomically replaces the current value with a new value and returns the old value.
#include <atomic>
#include <iostream>

int main(){
    std::atomic<int> atomicInt(0);
    int value = atomicInt.load(); // get value
    std::cout << "value: " << value << std::endl;
    atomicInt.store(1); // set value
    std::cout << "value: " << atomicInt.load() << std::endl;
    int old_value = atomicInt.exchange(2); // atomically replace and return the old value
    std::cout << "old_value: " << old_value << ", new_value: " << atomicInt.load() << std::endl;
    return 0;
}
Some simple arithmetic and bitwise operations are implemented atomically by fetch_add, fetch_sub, fetch_and, fetch_or, fetch_xor, etc.
std::atomic<int> counter(0);
counter.fetch_add(1);
counter.fetch_sub(1);
In computer science, compare-and-swap (CAS) is an atomic instruction used in multithreading to achieve synchronization. std::atomic offers CAS support through compare_exchange_weak and compare_exchange_strong. A CAS operation takes an expected value and a new value: it compares the current value of the atomic with the expected value and returns a boolean indicating whether they were equal. If they are equal, the new value is stored; otherwise, the expected value is overwritten with the current value.
#include <atomic>
#include <iostream>

int main(){
    std::atomic<int> atomicInt(0);
    int expected = 0;
    bool success = atomicInt.compare_exchange_weak(expected, 10);
    std::cout << "success: " << success
              << ", expected: " << expected
              << ", value: " << atomicInt.load()
              << std::endl;
    expected = 0;
    success = atomicInt.compare_exchange_strong(expected, 20);
    std::cout << "success: " << success
              << ", expected: " << expected
              << ", value: " << atomicInt.load()
              << std::endl;
    return 0;
}
The key difference between compare_exchange_strong and compare_exchange_weak lies in their behavior regarding spurious failures: compare_exchange_weak may fail spuriously, while compare_exchange_strong guarantees no spurious failures. Use compare_exchange_weak when efficiency is critical and you can tolerate, or easily handle, spurious failures in your code, typically inside a retry loop.
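A typical weak-CAS retry loop, as a sketch (not from the original post): atomically raising a shared value to a new maximum. The helper name atomic_max is illustrative; a spurious failure simply costs one more iteration.

#include <atomic>

// Atomically set 'value' to max(value, candidate).
void atomic_max(std::atomic<int>& value, int candidate){
    int current = value.load(std::memory_order_relaxed);
    while(current < candidate &&
          !value.compare_exchange_weak(current, candidate)){
        // On failure (spurious or real), 'current' is reloaded with the
        // latest value and the loop decides again whether to retry.
    }
}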
SpinLock with std::atomic
class SpinLock{
public:
    SpinLock(std::atomic_flag& flag) : flag(flag){
        while(flag.test_and_set(std::memory_order_acquire)); // spin until the previous owner clears the flag
    }
    ~SpinLock(){
        flag.clear(std::memory_order_release); // release the lock
    }
private:
    std::atomic_flag& flag;
};

std::atomic_flag flag = ATOMIC_FLAG_INIT;
SpinLock spinlock(flag);
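The SpinLock above is an RAII guard: the constructor spins until it acquires the flag, and the destructor releases it. As a minimal usage sketch (not from the original post; the names work, counter, and the thread count are illustrative), four threads increment a plain counter while holding the lock:

#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

// Assumes the SpinLock class defined above is visible here.
std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;
long counter = 0; // ordinary variable, protected by lock_flag

void work(){
    for(int i = 0; i < 100000; i++){
        SpinLock lock(lock_flag); // acquire: spins until the flag is clear
        ++counter;                // critical section
    }                             // release: the destructor clears the flag
}

int main(){
    std::vector<std::thread> threads;
    for(int i = 0; i < 4; i++) threads.emplace_back(work);
    for(auto& t : threads) t.join();
    std::cout << counter << std::endl; // always prints 400000
}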
Rvalue References
Moving objects is generally much faster than copying them, especially for large objects or objects with complex internal structures.
An rvalue reference is a special kind of reference that can only bind to rvalues.
The primary purpose of rvalue references is to enable move semantics.
Move semantics allows efficient transfer of resources (like memory) from one object to another without the need for expensive copying operations.
When an object is moved, its resources are transferred to the new object, leaving the original object in a valid but unspecified state (often empty), as the sketch below illustrates.
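A minimal illustration of move semantics (not from the original post; std::string and std::vector are just convenient stand-ins):

#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main(){
    std::string s = "a fairly long log message .....................";
    std::vector<std::string> lines;

    lines.push_back(s);            // copy: 's' keeps its contents
    lines.push_back(std::move(s)); // move: the internal buffer is transferred,
                                   // 's' is left valid but unspecified (typically empty)

    std::cout << lines.size() << std::endl; // 2
}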
With the help of rvalue references, we can also implement perfect forwarding:
#include <iostream>
#include <utility>

// Overloads that let us observe which reference category was forwarded.
void func(int& x)  { std::cout << "lvalue: " << x << std::endl; }
void func(int&& x) { std::cout << "rvalue: " << x << std::endl; }

template <typename T>
void forward_function(T&& param) {
    func(std::forward<T>(param)); // perfect forward to func
}

int main() {
    int x = 10;
    forward_function(x);            // calls func(int&)
    forward_function(std::move(x)); // calls func(int&&)
}
Queue Buffer
BufferBase is an abstract base class that defines the basic operations of a buffer: push(LogLine&& logline) pushes a log line into the buffer and pop(LogLine& logline) pops one out. A virtual destructor ensures that derived classes are destructed correctly.
class BufferBase{
public:
    virtual ~BufferBase() = default;
    virtual void push(LogLine&& logline) = 0;
    virtual bool pop(LogLine& logline) = 0;
};
Buffer provides the concrete buffer implementation. Item is used to store a log line; the char padding array pads each Item so that its total size is 256 bytes, keeping items memory-aligned.
write_state records the state of each slot.
static constexpr const size_t size = 32768; defines the capacity of this buffer (4 * 8 * 1024 items).
During the initialization phase, only the current thread accesses the write_state array. Since no other thread can touch the same memory locations at that point, memory_order_relaxed offers a more efficient way to initialize write_state. Although the relaxed order is used, this does not mean there is no ordering guarantee at all: within the same thread, these initialization operations still execute in program order.
The memory_order_acquire ensures that, before this atomic operation executes, the preceding reads and writes have completed and are visible to subsequent reads; that is, before the fetch_add runs, the write operations to the other positions in write_state have completed and are visible to subsequent read operations.
Copy constructor and assignment operator are disabled to avoid possible memory problems.
class Buffer{
public:
    struct Item{
        char padding[256 - sizeof(LogLine)];
        LogLine logline;
        Item(LogLine&& logline) : logline(std::move(logline)){}
    };
    static constexpr const size_t size = 32768;

    Buffer() : items(static_cast<Item*>(std::malloc(size * sizeof(Item)))){
        for(size_t i = 0; i <= size; i++) write_state[i].store(0, std::memory_order_relaxed);
        static_assert(sizeof(Item) == 256);
    }
    ~Buffer(){
        unsigned int write_cnt = write_state[size].load();
        for(size_t i = 0; i < write_cnt; i++) items[i].~Item();
        std::free(items);
    }
    bool push(LogLine&& logline, const unsigned int write_index){
        new(&items[write_index]) Item(std::move(logline));
        write_state[write_index].store(1, std::memory_order_release);
        return write_state[size].fetch_add(1, std::memory_order_acquire) + 1 == size;
    }
    bool pop(LogLine& logline, const unsigned int read_index){
        if(write_state[read_index].load(std::memory_order_acquire)){
            Item& item = items[read_index];
            logline = std::move(item.logline);
            return true;
        }
        return false;
    }
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
private:
    Item *items;
    std::atomic<unsigned int> write_state[size + 1]; // write_state[size]: write count
};
QueueBuffer offers a thread-safe queue buffer for log lines.
buffers is a std::queue<std::unique_ptr<Buffer>> that stores smart pointers to multiple Buffer objects.
w_cursor is an atomic pointer to the buffer that is currently being written.
write_index is a std::atomic<unsigned int> that atomically records the write index within the current writable buffer.
The create_buffer() function creates a new buffer and publishes it as the writable buffer. When a thread updates w_cursor to the address of a new buffer, it wants other threads to see this update as soon as possible. std::memory_order_release ensures that all memory operations performed before this store are visible to the threads that subsequently read w_cursor: when another thread reads the new w_cursor value, it also sees the writes made to the data in next_wbuffer.
class QueueBuffer : public BufferBase{
public:
    QueueBuffer() : r_cursor{nullptr}, write_index(0), flag{ATOMIC_FLAG_INIT}, read_index(0){
        create_buffer();
    }
    void push(LogLine&& logline) override{
        unsigned int windex = write_index.fetch_add(1, std::memory_order_relaxed);
        if(windex < Buffer::size){
            if(w_cursor.load(std::memory_order_acquire) -> push(std::move(logline), windex)){
                create_buffer();
            }
        }else{
            while(write_index.load(std::memory_order_acquire) >= Buffer::size);
            // wait until a new buffer is available
            push(std::move(logline));
        }
    }
    bool pop(LogLine& logline) override{
        if(r_cursor == nullptr) r_cursor = get_rbuffer();
        Buffer *rcursor = r_cursor; // local copy to avoid race conditions
        if(rcursor == nullptr) return false;
        if(rcursor -> pop(logline, read_index)){
            read_index++;
            if(read_index == Buffer::size){
                read_index = 0;
                r_cursor = nullptr;
                SpinLock spinlock(flag);
                buffers.pop();
            }
            return true;
        }
        return false;
    }
    QueueBuffer(const QueueBuffer&) = delete;
    QueueBuffer& operator=(const QueueBuffer&) = delete;
    // disable copy constructor and assignment
private:
    std::queue<std::unique_ptr<Buffer>> buffers;
    std::atomic<Buffer*> w_cursor;   // current write buffer
    Buffer* r_cursor;                // current read buffer
    std::atomic<unsigned int> write_index;
    unsigned int read_index;
    std::atomic_flag flag;

    void create_buffer(){
        std::unique_ptr<Buffer> next_wbuffer(new Buffer());
        w_cursor.store(next_wbuffer.get(), std::memory_order_release);
        SpinLock spinlock(flag);
        buffers.push(std::move(next_wbuffer));
        write_index.store(0, std::memory_order_relaxed);
    }
    Buffer* get_rbuffer(){
        SpinLock spinlock(flag);
        return buffers.empty() ? nullptr : buffers.front().get();
    }
};
Logger Class
The State enum defines three possible states for the logger: INIT (during initialization), ENABLED (ready to log messages), and DISABLED (shutting down).
pop() is designed to run in a separate thread within the Logger class. std::memory_order_acquire is crucial here: it ensures that any memory accesses that happened before the state was set to ENABLED are visible to this thread, which prevents the pop thread from processing log messages before the Logger object is fully initialized (including buffer_queue and file_writer). The second loop, while(buffer_queue -> pop(logline)) file_writer.write(logline);, which runs after the state has transitioned to State::DISABLED, reads and writes any remaining log lines still present in buffer_queue before the logger shuts down completely. This provides a graceful shutdown and ensures that no log lines are lost even if the state changes to DISABLED while there are still messages in the buffer.
class Logger{
public:
    Logger(const std::string& dir, const std::string& filename, uint32_t roll_size)
        : state(State::INIT),
          buffer_queue(new QueueBuffer()),
          file_writer(dir, filename, std::max(1u, roll_size)),
          thread(&Logger::pop, this){
        state.store(State::ENABLED, std::memory_order_release);
    }
    ~Logger(){
        state.store(State::DISABLED);
        thread.join(); // wait for the background thread to finish
    }
    void add(LogLine&& logline){
        buffer_queue -> push(std::move(logline));
    }
    void pop(){
        while(state.load(std::memory_order_acquire) == State::INIT);
        // wait until the constructor has finished
        LogLine logline(LogSeverity::INFO, nullptr, nullptr, 0);
        while(state.load(std::memory_order_seq_cst) == State::ENABLED){
            if(buffer_queue -> pop(logline)) file_writer.write(logline);
        }
        // read the remaining log lines
        while(buffer_queue -> pop(logline)) file_writer.write(logline);
    }
private:
    enum class State{
        INIT,
        ENABLED,
        DISABLED,
    };
    std::atomic<State> state;
    std::unique_ptr<BufferBase> buffer_queue;
    FileWriter file_writer;
    std::thread thread;
};
Init a logger
std::unique_ptr<Logger> logger;
std::atomic<Logger*> atomic_logger;

void init(const std::string& dir, const std::string filename, uint32_t roll_size){
    logger.reset(new Logger(dir, filename, roll_size));
    atomic_logger.store(logger.get(), std::memory_order_seq_cst);
}
std::memory_order_seq_cst enforces the strongest ordering guarantees, ensuring that all memory accesses performed before the atomic operation are visible to all threads in the program in the order they were issued. The init function might be called from multiple threads concurrently; without the seq_cst order, another thread could potentially observe an inconsistent state of atomic_logger.
Severity Level
Use enum class LogSeverity to represent severity levels.
#ifndef BASE_LOG_SEVERITY_H__
#define BASE_LOG_SEVERITY_H__
#include <cstdint>

namespace slog{
    enum class LogSeverity : uint8_t {
        DEBUG,
        INFO,
        ERROR,
        WARN,
        FATAL
    };
}
#endif
Log Line Time
Use std::chrono to provide precise timestamp information for logging.
#ifndef SLOGTIME_H
#define SLOGTIME_H
#include <chrono>
#include <ctime>

namespace slogtime{
    class LogLineTime{
    public:
        LogLineTime();
        explicit LogLineTime(std::chrono::system_clock::time_point now);
        const std::chrono::system_clock::time_point& when() const noexcept{return timestamp;}
        int sec() const noexcept{return tm_.tm_sec;}
        int usec() const noexcept{return usecs.count();}
        int min() const noexcept{return tm_.tm_min;}
        int hour() const noexcept{return tm_.tm_hour;}
        int day() const noexcept{return tm_.tm_mday;}
        int month() const noexcept{return tm_.tm_mon + 1;}
        int year() const noexcept{return tm_.tm_year + 1900;}
        int dayOfWeek() const noexcept{return tm_.tm_wday;}
        int dayInYear() const noexcept{return tm_.tm_yday;}
        int dst() const noexcept{return tm_.tm_isdst;}
        std::chrono::seconds gmtoffset() const noexcept{return gmtoffset_;}
        const std::tm& tm() const noexcept{return tm_;}
    private:
        std::tm tm_{}; // broken-down local time of creation of the LogLine
        std::chrono::system_clock::time_point timestamp;
        std::chrono::microseconds usecs;
        std::chrono::seconds gmtoffset_;
    };
}
#endif // SLOGTIME_H
LogLineTime::LogLineTime() : LogLineTime(std::chrono::system_clock::now()) {}

LogLineTime::LogLineTime(std::chrono::system_clock::time_point now) : timestamp(now) {
    time_t tt = std::chrono::system_clock::to_time_t(now);
    std::tm* ptm = std::localtime(&tt);
    tm_ = *ptm;
    gmtoffset_ = std::chrono::seconds(ptm->tm_gmtoff); // tm_gmtoff is a glibc/BSD extension
    usecs = std::chrono::duration_cast<std::chrono::microseconds>(now - std::chrono::system_clock::from_time_t(tt));
}

// usage:
slogtime::LogLineTime now;
int hour = now.hour();
int minute = now.min();
std::cout << "Current time: " << hour << ":" << minute << std::endl;
Stream to String
reinterpret_cast
reinterpret_cast is a cast operator that performs a low-level, non-portable reinterpretation of the bit pattern of an object. It essentially treats the memory occupied by one data type as if it held a different data type, performing no conversions or checks. We usually use reinterpret_cast to cast between different pointer types, and it is also widely used in low-level operations such as accessing raw memory (e.g., memory-mapped I/O) and memory management. However, incorrect use can lead to undefined behavior, crashes, and security vulnerabilities, and its behavior can be platform-specific due to differences in memory representation (e.g., endianness).
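Before looking at stream_to_string, here is a minimal sketch (not from the original post) of the pattern it relies on: a trivially copyable value is written into a raw byte buffer and later read back through reinterpret_cast. The buffer name and value are illustrative; std::memcpy is the strictly portable way to read it back as well, but the cast mirrors how the log line buffer is decoded below.

#include <cstdint>
#include <cstring>
#include <iostream>

int main(){
    alignas(std::uint32_t) char buffer[sizeof(std::uint32_t)];

    std::uint32_t line = 42;
    std::memcpy(buffer, &line, sizeof(line)); // "encode": copy the raw bytes into the buffer

    // "decode": treat the bytes as a uint32_t again (the buffer is aligned for uint32_t)
    std::uint32_t decoded = *reinterpret_cast<std::uint32_t*>(buffer);
    std::cout << decoded << std::endl; // 42
}

stream_to_string below applies the same idea to decode the header fields of a log line: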
void LogLine::stream_to_string(std::ostream& s){
    char* data = !heap_buffer ? stack_buffer : heap_buffer.get();
    const char* const end = data + used_bytes;
    slogtime::LogLineTime timenow = *reinterpret_cast<slogtime::LogLineTime*>(data);
    data += sizeof(slogtime::LogLineTime);
    std::thread::id threadid = *reinterpret_cast<std::thread::id*>(data);
    data += sizeof(std::thread::id);
    string_literal_t file = *reinterpret_cast<string_literal_t*>(data);
    data += sizeof(string_literal_t);
    string_literal_t function = *reinterpret_cast<string_literal_t*>(data);
    data += sizeof(string_literal_t);
    uint32_t line = *reinterpret_cast<uint32_t*>(data);
    data += sizeof(uint32_t);
    LogSeverity loglevel = *reinterpret_cast<LogSeverity*>(data);
    data += sizeof(LogSeverity);
    s << '[' << timenow.year() << '-'
      << timenow.month() << '-'
      << timenow.day() << ' '
      << timenow.hour() << ':' << timenow.min() << ':' << timenow.sec() << ']';
    s << '[' << level_to_string(loglevel) << ']'
      << '[' << threadid << ']'
      << '[' << file.s
      << ':' << function.s
      << ':' << line << "] ";
    stream_to_string(s, data, end);
    s << std::endl;
    if (loglevel == LogSeverity::FATAL) {
        s.flush();
    }
}
Decode
Generic Template:
template<typename T>
char* decode(std::ostream& s, char* data, T* dummy){
    T arg = *reinterpret_cast<T*>(data);
    s << arg;
    return data + sizeof(T);
}
T* dummy is a pointer to a dummy object of type T; this parameter is used only to provide type information to the compiler during template instantiation.
Specialization for LogLine::string_literal_t:
template<>
char* decode(std::ostream& s, char* data, LogLine::string_literal_t* dummy){
    LogLine::string_literal_t sliteral = *reinterpret_cast<LogLine::string_literal_t*>(data);
    s << sliteral.s;
    return data + sizeof(LogLine::string_literal_t);
}
s << sliteral.s; writes the actual string (sliteral.s) to the output stream.
Specialization for char*:
template<>
char* decode(std::ostream& s, char* data, char** dummy){
    while(*data != '\0'){
        s << *data;
        ++data;
    }
    return ++data;
}
return ++data; returns a pointer to the position just after the null terminator.
Types Definition
All the types to be encoded/decoded are defined in a tuple:
typedef std::tuple<char, char*, int32_t, int64_t, uint32_t, uint64_t, double, LogLine::string_literal_t> DataTypes;
Tuple Helper
A tuple helper is used to locate different types in the tuple:
template<typename T, typename Tuple>
struct TupleIndexHelper;

template <typename T>
struct TupleIndexHelper<T, std::tuple<>>{
    static constexpr const std::size_t value = 0;
};

template<typename T, typename...Types>
struct TupleIndexHelper<T, std::tuple<T, Types...>>{
    static constexpr std::size_t value = 0;
};

template<typename T, typename U, typename...Types>
struct TupleIndexHelper<T, std::tuple<U, Types...>>{
    static constexpr std::size_t value = 1 + TupleIndexHelper<T, std::tuple<Types...>>::value;
}; // recursive step to find the index of T
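As a quick sanity check (not part of the original post), the helper resolves to the position of each type in DataTypes; this index is the id byte written in front of every encoded argument:

// With DataTypes = std::tuple<char, char*, int32_t, int64_t, uint32_t,
//                             uint64_t, double, LogLine::string_literal_t>:
static_assert(TupleIndexHelper<char, DataTypes>::value == 0);
static_assert(TupleIndexHelper<int32_t, DataTypes>::value == 2);
static_assert(TupleIndexHelper<double, DataTypes>::value == 6);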
Recursive stream_to_string
void LogLine::stream_to_string(std::ostream& s, char* start, const char* const end){
    if(start == end) return;
    int id = static_cast<int>(*start);
    start++;
    switch(id){
    case 0:
        stream_to_string(s, decode(s, start, static_cast<std::tuple_element<0, DataTypes>::type*>(nullptr)), end);
        return;
    case 1:
        stream_to_string(s, decode(s, start, static_cast<std::tuple_element<1, DataTypes>::type*>(nullptr)), end);
        return;
    case 2:
        stream_to_string(s, decode(s, start, static_cast<std::tuple_element<2, DataTypes>::type*>(nullptr)), end);
        return;
    case 3:
        stream_to_string(s, decode(s, start, static_cast<std::tuple_element<3, DataTypes>::type*>(nullptr)), end);
        return;
    case 4:
        stream_to_string(s, decode(s, start, static_cast<std::tuple_element<4, DataTypes>::type*>(nullptr)), end);
        return;
    case 5:
        stream_to_string(s, decode(s, start, static_cast<std::tuple_element<5, DataTypes>::type*>(nullptr)), end);
        return;
    case 6:
        stream_to_string(s, decode(s, start, static_cast<std::tuple_element<6, DataTypes>::type*>(nullptr)), end);
        return;
    case 7:
        stream_to_string(s, decode(s, start, static_cast<std::tuple_element<7, DataTypes>::type*>(nullptr)), end);
        return;
    }
}
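The encoding side is not shown in this post. As a rough sketch of what a matching encode step might look like (the function name, parameters, and buffer handling are assumptions, not the author's actual implementation), each argument is written as its type id byte followed by its raw bytes, so the recursive decoder above can dispatch on the id:

// Hypothetical sketch, not the real implementation: append one argument to the
// log line's byte buffer as <type id><payload>.
template<typename T>
void encode(char*& buffer, std::size_t& used_bytes, T arg){
    *buffer = static_cast<char>(TupleIndexHelper<T, DataTypes>::value); // type id byte
    *reinterpret_cast<T*>(buffer + 1) = arg;                            // payload bytes
    buffer += 1 + sizeof(T);
    used_bytes += 1 + sizeof(T);
}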
Macros
Log Macros
operator+= is used to add one log line through Slog.
std::memory_order_acquire ensures that the loaded pointer refers to a fully constructed logger and that operations on the logger performed after acquiring the pointer are correctly ordered.
struct Slog{
    bool operator+=(LogLine& logline);
};

bool Slog::operator+=(LogLine& logline){
    atomic_logger.load(std::memory_order_acquire) -> add(std::move(logline));
    return true;
}

#define SLOG(LEVEL) slog::Slog() += slog::LogLine(LEVEL, __FILE__, __func__, __LINE__)
#define LOG_DEBUG SLOG(slog::LogSeverity::DEBUG)
#define LOG_INFO SLOG(slog::LogSeverity::INFO)
#define LOG_ERROR SLOG(slog::LogSeverity::ERROR)
#define LOG_WARN SLOG(slog::LogSeverity::WARN)
#define LOG_FATAL SLOG(slog::LogSeverity::FATAL)
Log messages with a stream style:
LOG_INFO << info << 123 << 12.34;
LOG_FATAL << "fatal error: " << str;
Assert Macros
In most cases, assert functions can be implemented using macros.
#define CHECK(condition) \
if (!(condition)){ \
LOG_FATAL << "CHECK failed: " << #condition; \
}
Calling std::abort(); aborts the program; the shell then reports an error such as:
zsh IOT instruction (core dumped)
Check pointer:
#define CHECK_P(ptr) \
    if ((ptr) == nullptr){ \
        LOG_WARN << "CHECK_P failed: pointer " << #ptr << " is null"; \
    }
Check string:
// Case-insensitive comparison; returns true when the two strings are equal
// (note: this shadows the POSIX strcasecmp, whose return convention differs).
bool strcasecmp(const char* s1, const char* s2) {
    while (*s1 && *s2) {
        if (tolower(*s1) != tolower(*s2)) {
            return false;
        }
        s1++;
        s2++;
    }
    return *s1 == *s2;
}

#define CHECK_STREQ(str1, str2) \
    if (strcmp(str1, str2) != 0) { \
        LOG_WARN << "CHECK_STREQ failed: \"" << str1 << "\" != \"" << str2 << "\""; \
    }

#define CHECK_STREQ_CASE(str1, str2) \
    if (!strcasecmp(str1, str2)) { \
        LOG_WARN << "CHECK_STREQ_CASE failed: \"" << str1 << "\" != \"" << str2 << "\""; \
    }
Test the logger
Test the logger in a simple multi-thread environment:
#include "include/slog.h"
#include <iostream>
#include <string>
#include <vector>
#include <ctime>
void benchmark(){
const char* const str = "benchmark";
auto begin = std::chrono::high_resolution_clock::now();
for(int i = 0; i < 100000; i++){
LOG_INFO << "Logging-" << i << "-double-" << -99.876 << "-uint64-" << (uint64_t)i;
}
auto end = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin);
long int avg = duration.count() / 100000;
printf("Avg: %ld\n", avg);
printf("Total: %ld\n", duration.count());
}
template<typename F>
void create_thread(F&& f, int cnt){
std::vector<std::thread>threads;
for(int i = 0; i < cnt; i++){
threads.emplace_back(f);
}
for(int i = 0; i < cnt; i++){
threads[i].join();
}
}
int main(){
slog::init("/tmp/log/", "log", 8);
for(auto threads:{1,2,3}){
create_thread(benchmark, threads);
}
LOG_INFO << "HELLO";
int a=1;
int b=2;
CHECK_EQ_F(1,2);
return 0;
}
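A possible way to build and run the test (the file names here are placeholders; substitute the project's actual source files):

g++ -std=c++17 -O2 -pthread test.cpp slog.cpp -o slog_test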