[GiNaC-devel] Faster unarchiving of large equation systems

Kemlath Orcslayer kemlath at googlemail.com
Thu Dec 6 07:29:53 CET 2018


Dear All,

I’m using GiNaC to automatically derive the Jacobians of very large non-linear equation systems, for use with the Sundials suite of solvers.
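
For illustration, a toy version of what I mean by deriving the Jacobian symbolically; this is just a made-up two-equation system, not my actual setup:

#include <iostream>
#include <ginac/ginac.h>
using namespace GiNaC;

int main()
{
	// Toy non-linear system f(x, y); Jacobian entry J(i,j) = d f_i / d x_j
	symbol x("x"), y("y");
	lst f = {x*x*y + sin(y), exp(x) - y*y};
	lst vars = {x, y};

	matrix J(f.nops(), vars.nops());
	for (unsigned i = 0; i < f.nops(); ++i)
		for (unsigned j = 0; j < vars.nops(); ++j)
			J(i, j) = f.op(i).diff(ex_to<symbol>(vars.op(j)));

	std::cout << ex(J) << std::endl;   // symbolic Jacobian, ready for further processing
	return 0;
}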

The need for parallelisation arose, and I used MPI for the job since GiNaC is not well suited to multi-threading due to its reference-counting scheme. So I set up appropriate MPI broadcasting code for GiNaC ex objects using the available archive classes in GiNaC and Boost-based memory stream I/O.
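
In case the details are useful, here is roughly what the broadcast step looks like; a minimal sketch assuming GiNaC’s archive stream operators and a plain std::stringstream in place of the Boost memory streams I actually use (the helper name broadcast_ex is purely illustrative):

#include <mpi.h>
#include <sstream>
#include <string>
#include <ginac/ginac.h>

// Illustrative helper, not part of GiNaC: rank 0 archives the expression,
// the raw bytes are broadcast, and every rank rebuilds the expression.
GiNaC::ex broadcast_ex(const GiNaC::ex &e, const GiNaC::lst &syms, MPI_Comm comm)
{
	int rank;
	MPI_Comm_rank(comm, &rank);

	std::string buf;
	if (rank == 0) {
		GiNaC::archive ar;
		ar.archive_ex(e, "e");       // store the expression under a name
		std::ostringstream oss;
		oss << ar;                   // binary archive -> memory buffer
		buf = oss.str();
	}

	// Ship the buffer size first, then the raw bytes.
	unsigned long len = buf.size();
	MPI_Bcast(&len, 1, MPI_UNSIGNED_LONG, 0, comm);
	buf.resize(len);
	MPI_Bcast(&buf[0], static_cast<int>(len), MPI_BYTE, 0, comm);

	// Every rank (including 0, for simplicity) unarchives from the buffer;
	// this is where the slow symbol lookup described below happens.
	GiNaC::archive ar;
	std::istringstream iss(buf);
	iss >> ar;
	GiNaC::lst sym_lst = syms;
	return ar.unarchive_ex(sym_lst, "e");
}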

All went well except for the unarchiving performance…
The problem lies in symbol::read_archive, where for each unarchived symbol the list of ALL symbols is searched linearly for one with the same name:

void symbol::read_archive(const archive_node &n, lst &sym_lst)
{
	// ...

	// If symbol is in sym_lst, return the existing symbol
	for (auto & s : sym_lst) {
		if (is_a<symbol>(s) && (ex_to<symbol>(s).name == tmp_name)) {
			*this = ex_to<symbol>(s);
			return;
		}
	}

	// ...
}

For cases with 500K symbols this becomes unbearably slow: the scan is O(N) per symbol, so unarchiving all of them is quadratic in the number of symbols (and this has to happen on every MPI node).

My solution in my little local GiNaC branch was to introduce a second read_archive interface called read_archive_MPI which, instead of a GiNaC::lst &sym_lst, takes a std::map<std::string, GiNaC::ex> that allows a previously stored symbol of the same name to be found quickly:

void symbol::read_archive_MPI(const archive_node &n, SymbolMap &sym_lst)
{
	inherited::read_archive_MPI(n, sym_lst);
	serial = next_serial++;
	std::string tmp_name;
	n.find_string("name", tmp_name);

	// If symbol is in sym_lst, return the existing symbol
	SymbolMap::iterator it = sym_lst.find(tmp_name);
	if (it != sym_lst.end())
	{
		*this = ex_to<symbol>(it->second);
		// ...
	}

	// ...
}

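For completeness, the SymbolMap above is nothing fancy; something along these lines, where the alias is taken from my snippet and the helper only illustrates the lookup (an O(log N) find() instead of scanning the whole lst):

#include <map>
#include <string>
#include <ginac/ginac.h>

typedef std::map<std::string, GiNaC::ex> SymbolMap;

// Reuse an already-seen symbol of the same name, otherwise remember
// the new one so later occurrences map to the same object.
inline GiNaC::ex find_or_register(SymbolMap &syms, const GiNaC::symbol &s)
{
	SymbolMap::iterator it = syms.find(s.get_name());
	if (it != syms.end())
		return it->second;
	syms[s.get_name()] = s;
	return s;
}
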
This approach is ABI-compatible with older code but requires read_archive_MPI members in all GiNaC::basic-derived classes. I’d like to stress that this does not introduce any MPI dependencies into GiNaC; the MPI serialisation code is separate and merely utilises the new archiving system.

Nonetheless, I figured I might share this with you, since I deem it a reasonable improvement for large equation systems and it makes GiNaC usable for MPI computing.

Let me know what you think

Klaus




