
About the fcfTest library

Here are the main features of the fcfTest unit testing library.

One macro for all cases

The library's main feature is the FCF_TEST macro. It doesn't just test a logical expression; it works as an intelligent debugger. When a test fails, the library automatically extracts variable names and their current values.

Example: Comparison with the traditional approach

Traditional approach (without fcfTest)

int expected = 100;
int actual = calculate_value();
if (actual != expected) {
  std::cout << "Error: expected " << expected
            << " but got " << actual << std::endl;
  throw std::runtime_error("Test failed");
}

Approach with fcfTest

int expected = 100;
int actual = calculate_value();
FCF_TEST(actual == expected, expected, actual);

If actual is 95, you will get an instant and clear report:

Test error: actual == expected [FILE: main.cpp:12]
Values:
  expected: 100
  actual:   95

Zero-Configuration: One File and You're Up and Running

fcfTest is a header-only library. You don't need to configure CMake, download binaries, or mess with linking. This makes it an ideal choice for microservices, small utilities, or embedded systems.

Just add the file to the project:

// In your main.cpp file
#define FCF_TEST_IMPLEMENTATION
#include <fcfTest/test.hpp>

FCF_TEST_DECLARE("Core", "Math", "AdditionTest") {
  FCF_TEST(2 + 2 == 4);
}

int main(int a_argc, char* a_argv[]) {
  bool error;
  // We run all tests declared in the application
  fcf::NTest::cmdRun(a_argc, a_argv, fcf::NTest::CRM_RUN, &error);
  return error ? 1 : 0;
}

Hierarchy and Order: Complete Control

In large projects, tests often depend on system state. fcfTest offers a three-level structure: Part -> Group -> Test. You can run only the required sections or specify a strict execution order.

Example: Controlling Execution Order

// Declare tests
FCF_TEST_DECLARE("Database", "Connection", "InitTest") { /* ... */ }
FCF_TEST_DECLARE("Database", "Query", "SelectTest") { /* ... */ }

// Ensure that initialization occurs first, then queries
FCF_TEST_PART_ORDER("Database", 1);
FCF_TEST_GROUP_ORDER("Connection", 1);
FCF_TEST_GROUP_ORDER("Query", 2);

Running via the command line allows you to filter tests without recompiling:

# Run only tests from the "Query" group
./my_tests --test-group Query

# Run everything except the "Legacy" group
./my_tests --test-ignore-group Legacy

More than just tests: Logger and Benchmarking

Why bother with three different libraries when you can use one? fcfTest is a developer's all-in-one solution.

Built-in Logger

You can log test execution with different severity levels. This is useful for monitoring system state during long tests.

fcf::NTest::inf() << "Starting heavy data processing..." << std::endl;

// You can add a custom prefix (e.g., time)
fcf::NTest::logger().addedPrefixStr(" [LOG]: ");
fcf::NTest::inf() << "Some actions have been completed" << std::endl;

Output:

Starting heavy data processing...
 [LOG]: Some actions have been completed

Built-in Benchmarking

The Duration class lets you measure code performance directly within tests. This turns a regular unit test into a performance verification tool.

#define FCF_TEST_IMPLEMENTATION
#include <fcfTest/test.hpp>

#include <algorithm>
#include <vector>

FCF_TEST_DECLARE("Performance", "Sort", "VectorSort") {
  fcf::NTest::Duration bench(1000); // 1000 iterations
  std::vector<int> sorted;
  bench([&sorted]() {
    sorted = {5, 2, 9, 1};
    std::sort(sorted.begin(), sorted.end());
  });
  FCF_TEST(std::is_sorted(sorted.begin(), sorted.end()));
  fcf::NTest::inf() << "Avg time: " << bench.duration().count() << " ns" << std::endl;
}

int main(int a_argc, char* a_argv[]) {
  bool error;
  // We run all tests declared in the application
  fcf::NTest::cmdRun(a_argc, a_argv, fcf::NTest::CRM_RUN, &error);
  return error ? 1 : 0;
}