Overview
ccrawl is composed of several Python modules and sub-packages:
- Module `main` defines all commands of the ccrawl command-line tool. All commands are handled by the Python click package, which deals with options and arguments.
- Module `core` defines all internal classes used to represent collected objects, i.e. typedefs, structs, unions, C++ classes, function prototypes, macros, etc.
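As an illustration of what such internal classes might hold, here is a minimal, hypothetical sketch; the `StructDef` and `TypedefDef` names below are assumptions for illustration, not ccrawl's actual class names:

```python
from dataclasses import dataclass, field

# Hypothetical model of collected definitions (illustrative only;
# ccrawl's real classes differ).

@dataclass
class StructDef:
    # struct tag and its ordered (type, field-name) members
    identifier: str
    members: list = field(default_factory=list)

@dataclass
class TypedefDef:
    identifier: str
    target: str  # the aliased type expression

# collected from: struct point { int x; int y; }; typedef struct point point_t;
p = StructDef("struct point", [("int", "x"), ("int", "y")])
t = TypedefDef("point_t", "struct point")
```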
- Module `parser` implements ccrawl's interface to the clang compiler by relying on the libclang Python package. This interface collects C/C++ definitions from input (header) files into a database.
- Module `db` defines ccrawl's interface to the underlying databases. It allows querying for various properties of stored C/C++ definitions, outputting the full definition of a chosen structure (all needed types, recursively), or instantiating an "external" object associated with the chosen definition (for example a ctypes instance, an amoco struct, or a Ghidra data type instance).
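For instance, instantiating a collected `struct point { int x; int y; };` definition as a ctypes type could yield something equivalent to the following hand-written sketch (not ccrawl's generated output):

```python
import ctypes

# ctypes equivalent of: struct point { int x; int y; };
class point(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int), ("y", ctypes.c_int)]

p = point(x=1, y=2)
# the ctypes instance mirrors the C memory layout
assert ctypes.sizeof(point) == 2 * ctypes.sizeof(ctypes.c_int)
```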
- Module `conf` provides the global configuration, based on the traitlets package.
- Module `graphs` provides the Node, Link and CGraph classes that are used to encode the dependency graph of a type and to locate cyclic dependencies in this graph.
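The idea can be sketched with a minimal dependency mapping and a depth-first cycle check; this is generic code, not ccrawl's Node/Link/CGraph API:

```python
# Minimal dependency-graph cycle detection (illustrative only).
def find_cycle(deps):
    """deps maps a type name to the names it depends on.
    Returns one cyclic path if any, else None."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in deps}

    def dfs(n, path):
        color[n] = GREY           # node is on the current DFS path
        for m in deps.get(n, ()):
            if color.get(m, WHITE) == GREY:
                return path + [m]  # back-edge: cycle found
            if color.get(m, WHITE) == WHITE:
                r = dfs(m, path + [m])
                if r:
                    return r
        color[n] = BLACK          # fully explored, no cycle through n
        return None

    for n in list(deps):
        if color[n] == WHITE:
            r = dfs(n, [n])
            if r:
                return r
    return None

# struct A references B, and B references A back:
cycle = find_cycle({"A": ["B"], "B": ["A"], "C": ["A"]})
# -> ["A", "B", "A"]
```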
- Module `utils` implements the pyparsing utilities for decomposing a C/C++ type into a ccrawl object.
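ccrawl relies on pyparsing for this; purely to illustrate the kind of decomposition involved, here is a much cruder regex-based sketch (a swapped-in technique, not ccrawl's parser) that splits a simple declaration into base type, name, pointer depth and array dimensions. It does not handle qualifiers or function pointers:

```python
import re

def decompose(decl):
    """Very rough split of a simple C declaration (illustrative only)."""
    dims = [int(d) for d in re.findall(r'\[(\d+)\]', decl)]   # array sizes
    core = re.sub(r'\[\d*\]', '', decl)                       # drop [..] suffixes
    stars = core.count('*')                                   # pointer depth
    tokens = core.replace('*', ' ').split()
    base, name = ' '.join(tokens[:-1]), tokens[-1]
    return base, name, stars, dims

# e.g. decompose("struct foo *tab[8]") -> ("struct foo", "tab", 1, [8])
```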
Sub-package `formatters` deals with translating the queried definitions into a specific language (raw, C/C++, amoco, ctypes, etc.) and sub-package `ext` deals with instantiating the queried definitions in specific Python tools (amoco, ctypes, Ghidra bridge, etc.).
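As a sketch of what a formatter does conceptually, the snippet below renders a simple struct description as C source; this is a hand-rolled illustration, not ccrawl's formatters API:

```python
def to_c(name, members):
    """Render a (type, field-name) member list as a C struct definition."""
    body = "\n".join(f"  {t} {f};" for t, f in members)
    return f"struct {name} {{\n{body}\n}};"

print(to_c("point", [("int", "x"), ("int", "y")]))
# struct point {
#   int x;
#   int y;
# };
```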