A famous example of a periodicity forcing word equation is $xy =
yx$. It means that if the word $xy$ equals the word $yx$, obtained by
a _cut-and-swap_ process, then there exist a word $w$ and integers
$i,j$ such that $x = w^i$ and $y = w^j$. Sections 1 and 5 of the paper contain a
brief yet very helpful overview of existing results about periodicity
forcing.
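The classical proof of this fact is effective: when $xy = yx$ and both words are nonempty, the common root $w$ can be taken of length $\gcd(|x|, |y|)$. A minimal Python sketch of that recipe (the helper name `common_root` is mine):

```python
from math import gcd

def common_root(x: str, y: str):
    """If xy == yx, return a word w with x = w^i and y = w^j; else None."""
    if x + y != y + x:
        return None  # the equation xy = yx does not hold
    if not x and not y:
        return ""
    # By the Lyndon-Schutzenberger / Fine-and-Wilf argument, the prefix of
    # length gcd(|x|, |y|) is a common root (math.gcd(0, n) == n covers
    # the case where one of the words is empty).
    g = gcd(len(x), len(y))
    return (x + y)[:g]
```

For example, `common_root("abab", "ab")` returns `"ab"`, while `common_root("ab", "ba")` returns `None` since `abba != baab`.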
About ten years ago, Czeizler, Kari and Seki, in their paper [On a
special class of primitive words](https://www.csd.uwo.ca/~lkari/mfcs.pdf), introduced the notion of a
pseudo-repetition. It can be explained with the following example.
When DNA molecules are regarded as words, the Watson-Crick
correspondence $$G \sim C, A \sim T$$ between guanine ($G$) and
cytosine ($C$), adenine ($A$) and thymine ($T$) allows us to say that
a word $ATCGATTGAGCTCTAGCG$ contains the same information as the word
$TAGCTAACTCGAGATCGC$. A pseudo-repetition of a word is a concatenation
of copies of the word and of its Watson-Crick complement. So, for instance, $ATC \sim TAG$, and
the word $ATCTAGATCATCTAG$ is a pseudo-repetition of the word $ATC$.
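The definition is easy to check mechanically. A small sketch, assuming the letterwise complement used in the example above (the names `wc` and `is_pseudo_repetition` are mine):

```python
COMP = str.maketrans("ACGT", "TGCA")

def wc(word: str) -> str:
    """Letterwise Watson-Crick complement: A<->T, C<->G."""
    return word.translate(COMP)

def is_pseudo_repetition(word: str, t: str) -> bool:
    """True iff word factors into blocks, each equal to t or to wc(t)."""
    n = len(t)
    if n == 0 or len(word) % n != 0:
        return False
    blocks = (word[i:i + n] for i in range(0, len(word), n))
    return all(b in (t, wc(t)) for b in blocks)
```

With these definitions, `wc("ATC")` is `"TAG"` and `is_pseudo_repetition("ATCTAGATCATCTAG", "ATC")` is `True`, matching the example: the word factors as $ATC \cdot TAG \cdot ATC \cdot ATC \cdot TAG$.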
In an abstract algebraic setting (involving monoids, morphic
permutations, equivalence classes and anticongruences), the author
studies an extension of the notion of periodicity forcing, using the
recently introduced concept of pseudo-repetitions and related
ideas. Holub proves, for instance, that any periodicity forcing
classical word equation also forces pseudo-periodicity. He provides
interesting new results and generalizations of previously known results.
MathSciNet:MR4074724.
A nice coarse-graining based heuristic to partition the nodes into k sets of similar sizes while minimizing the number of edges between those sets.
Random maximal matchings are used to coarse-grain the graph. Maybe fast heuristics to obtain a large maximal matching can be considered, such as using the two rules detailed here: [Karp-Sipser based kernels for bipartite graph matching](https://papers-gamma.link/paper/160).
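The random-matching coarsening step can be sketched as follows. This is a minimal Python illustration, not the METIS implementation; the function names and the adjacency-dict graph representation are my own choices:

```python
import random

def random_maximal_matching(adj):
    """Greedy random maximal matching on a graph given as {node: set_of_neighbours}.

    Visit nodes in random order; match each still-unmatched node with a
    random unmatched neighbour. The result is maximal: no edge can be added.
    """
    match = {}
    nodes = list(adj)
    random.shuffle(nodes)
    for u in nodes:
        if u in match:
            continue
        candidates = [v for v in adj[u] if v not in match and v != u]
        if candidates:
            v = random.choice(candidates)
            match[u] = v
            match[v] = u
    return match

def coarsen(adj, match):
    """Contract each matched pair into one supernode; unmatched nodes survive alone."""
    rep = {u: min(u, match.get(u, u)) for u in adj}  # representative of u's pair
    coarse = {}
    for u, nbrs in adj.items():
        cu = rep[u]
        coarse.setdefault(cu, set())
        for v in nbrs:
            cv = rep[v]
            if cu != cv:
                coarse[cu].add(cv)
    return coarse
```

Each coarsening round roughly halves the number of nodes when the matching is large, which is why a large (rather than merely maximal) matching, e.g. via Karp-Sipser-style rules, is attractive here.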
Publicly available implementation: http://glaros.dtc.umn.edu/gkhome/metis/metis/overview
How to efficiently generate combinatorial structures
such as posets, trees, B-trees, graphs, permutations, set partitions, Dyck paths, multisets or necklaces ... ?
Ruskey's book gives answers.
It contains many surprising descriptions (and bibliographic references)
of constant amortized time (CAT) and loopless algorithms.
> **CAT Algorithms** The holy grail of generating combinatorial objects is to find an algorithm that runs in Constant Amortized Time. This means that the amount of computation, after a small amount of preprocessing, is proportional to the number of objects that are listed. We do not count the time to actually output or process the objects; we are only concerned with the amount of data structure change that occurs as the objects are being generated.
_Ruskey's book, page 8_
> A **loopless algorithm** is an imperative algorithm that generates successive combinatorial objects, such as partitions, permutations, and combinations, in constant time and the first object in linear time
_[Loopless algorithm, Wikipedia](https://en.wikipedia.org/wiki/Loopless_algorithm)_
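As a taste of what "loopless" means in practice, here is a sketch of the classic focus-pointer generator for binary Gray codes (Knuth's Algorithm L, one of the algorithms the book's style of analysis applies to): each successive bitstring is produced with a constant number of operations, not just constant amortized time.

```python
def gray_codes(n):
    """Loopless generation of all 2^n bitstrings in Gray-code order.

    Uses "focus pointers" f so that the bit to flip is found in O(1)
    per generated object, with no inner loop.
    """
    a = [0] * n              # current bitstring
    f = list(range(n + 1))   # focus pointers
    while True:
        yield a.copy()
        j = f[0]
        f[0] = 0
        if j == n:
            return           # all 2^n strings have been visited
        f[j] = f[j + 1]
        f[j + 1] = j + 1
        a[j] ^= 1            # flip exactly one bit
```

Consecutive outputs differ in exactly one bit, e.g. for n = 2 the order is 00, 10, 11, 01.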