Maximilian Zorn, M.Sc.

Chair for Mobile and Distributed Systems

Ludwig-Maximilians-Universität München, Institute of Informatics

Oettingenstraße 67
80538 München

Zoom Personal Meeting Room:
lmu-munich.zoom.us/my/max.zorn

Room E105

Phone: +49 89 / 2180-9259 (currently not staffed)

Email: maximilian.zorn@ifi.lmu.de


🔬 Research Interests

  • (Quantum/Hybrid) Reinforcement Learning
  • Quantum Circuit Construction
  • Self-Replication (Properties) in Neural Networks
  • Cooperation in Multi-Agent Systems

🎓 Teaching (Assistance)

📚 Publications

Also available on Google Scholar, ResearchGate, and LinkedIn.

2024

  • M. Zorn, P. Altmann, G. Stenzel, M. Kölle, C. Linnhoff-Popien, and T. Gabor, Self-Adaptive Robustness of Applied Neural-Network-Soups, 2024. doi:10.1162/isal_a_00811
    [BibTeX] [Download PDF]
    @inproceedings{zorn24selfadapt,
    author = {Zorn, Maximilian and Altmann, Philipp and Stenzel, Gerhard and Kölle, Michael and Linnhoff-Popien, Claudia and Gabor, Thomas},
    title = "{Self-Adaptive Robustness of Applied Neural-Network-Soups}",
    booktitle = {ALIFE 2024: Proceedings of the 2024 Artificial Life Conference},
    series = {Artificial Life Conference Proceedings},
    pages = {74},
    year = {2024},
    month = {07},
    doi = {10.1162/isal_a_00811},
    url = {https://doi.org/10.1162/isal\_a\_00811},
    eprint = {https://direct.mit.edu/isal/proceedings-pdf/isal2024/36/74/2461231/isal\_a\_00811.pdf},
    }

  • M. Zorn, S. Gerner, P. Altmann, and T. Gabor, "Final Productive Fitness for Surrogates in Evolutionary Algorithms," in Proceedings of the Genetic and Evolutionary Computation Conference Companion, New York, NY, USA, 2024, pp. 583–586. doi:10.1145/3638530.3654433
    [BibTeX] [Download PDF]
    @inproceedings{zorn24fpf,
    author = {Zorn, Maximilian and Gerner, Sarah and Altmann, Philipp and Gabor, Thomas},
    title = {Final Productive Fitness for Surrogates in Evolutionary Algorithms},
    year = {2024},
    isbn = {9798400704956},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3638530.3654433},
    doi = {10.1145/3638530.3654433},
    booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference Companion},
    pages = {583–586},
    numpages = {4},
    keywords = {evolutionary algorithms, dynamic objective, surrogate, productive fitness},
    location = {Melbourne, VIC, Australia},
    series = {GECCO '24 Companion}
    }

  • S. Zielinski, M. Zorn, T. Gabor, S. Feld, and C. Linnhoff-Popien, "Using an Evolutionary Algorithm to Create (MAX)-3SAT QUBOs," in Proceedings of the Genetic and Evolutionary Computation Conference Companion, New York, NY, USA, 2024, pp. 1984–1992. doi:10.1145/3638530.3664153
    [BibTeX] [Download PDF]
    @inproceedings{zielinski24using,
    author = {Zielinski, Sebastian and Zorn, Maximilian and Gabor, Thomas and Feld, Sebastian and Linnhoff-Popien, Claudia},
    title = {Using an Evolutionary Algorithm to Create (MAX)-3SAT QUBOs},
    year = {2024},
    isbn = {9798400704956},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3638530.3664153},
    doi = {10.1145/3638530.3664153},
    booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference Companion},
    pages = {1984–1992},
    numpages = {9},
    keywords = {QUBO, (MAX)-3SAT, combinatorial optimization, evolutionary algorithm},
    location = {Melbourne, VIC, Australia},
    series = {GECCO '24 Companion}
    }

  • J. Stein, N. Roshani, M. Zorn, P. Altmann, M. Kölle, and C. Linnhoff-Popien, "Improving Parameter Training for VQEs by Sequential Hamiltonian Assembly," in Proceedings of the 16th International Conference on Agents and Artificial Intelligence – Volume 2: ICAART, 2024, pp. 99-109. doi:10.5220/0012312500003636
    [BibTeX]
    @inproceedings{stein2023improving,
    title={Improving Parameter Training for VQEs by Sequential Hamiltonian Assembly},
    author={Stein, Jonas and Roshani, Navid and Zorn, Maximilian and Altmann, Philipp and K{\"o}lle, Michael and Linnhoff-Popien, Claudia},
    booktitle={Proceedings of the 16th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
    year={2024},
    pages={99-109},
    publisher={SciTePress},
    organization={INSTICC},
    doi={10.5220/0012312500003636},
    isbn={978-989-758-680-4},
    issn={2184-433X},
    }

  • J. Stein, T. Rohe, F. Nappi, J. Hager, D. Bucher, M. Zorn, M. Kölle, and C. Linnhoff-Popien, "Introducing Reducing-Width-QNNs, an AI-inspired Ansatz design pattern," in Proceedings of the 16th International Conference on Agents and Artificial Intelligence – Volume 3: ICAART, 2024, pp. 1127-1134. doi:10.5220/0012449800003636
    [BibTeX]
    @inproceedings{stein2023introducing,
    title={Introducing Reducing-Width-QNNs, an AI-inspired Ansatz design pattern},
    author={Stein, Jonas and Rohe, Tobias and Nappi, Francesco and Hager, Julian and Bucher, David and Zorn, Maximilian and K{\"o}lle, Michael and Linnhoff-Popien, Claudia},
    booktitle={Proceedings of the 16th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART},
    year={2024},
    pages={1127-1134},
    publisher={SciTePress},
    organization={INSTICC},
    doi={10.5220/0012449800003636},
    isbn={978-989-758-680-4},
    issn={2184-433X},
    }

  • M. Kölle, M. Hgog, F. Ritz, P. Altmann, M. Zorn, J. Stein, and C. Linnhoff-Popien, "Quantum Advantage Actor-Critic for Reinforcement Learning," in Proceedings of the 16th International Conference on Agents and Artificial Intelligence – Volume 1: ICAART, 2024, pp. 297-304. doi:10.5220/0012383900003636
    [BibTeX]
    @inproceedings{kolle2024quantum,
    title={Quantum Advantage Actor-Critic for Reinforcement Learning},
    author={K{\"o}lle, Michael and Hgog, Mohamad and Ritz, Fabian and Altmann, Philipp and Zorn, Maximilian and Stein, Jonas and Linnhoff-Popien, Claudia},
    booktitle={Proceedings of the 16th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
    year={2024},
    pages={297-304},
    publisher={SciTePress},
    organization={INSTICC},
    doi={10.5220/0012383900003636},
    isbn={978-989-758-680-4},
    issn={2184-433X},
    }

  • M. Kölle, T. Schubert, P. Altmann, M. Zorn, J. Stein, and C. Linnhoff-Popien, "A Reinforcement Learning Environment for Directed Quantum Circuit Synthesis," in Proceedings of the 16th International Conference on Agents and Artificial Intelligence – Volume 1: ICAART, 2024, pp. 83-94. doi:10.5220/0012383200003636
    [BibTeX]
    @inproceedings{kolle2024reinforcement,
    title={A Reinforcement Learning Environment for Directed Quantum Circuit Synthesis},
    author={K{\"o}lle, Michael and Schubert, Tom and Altmann, Philipp and Zorn, Maximilian and Stein, Jonas and Linnhoff-Popien, Claudia},
    booktitle={Proceedings of the 16th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
    year={2024},
    pages={83-94},
    publisher={SciTePress},
    organization={INSTICC},
    doi={10.5220/0012383200003636},
    isbn={978-989-758-680-4},
    issn={2184-433X},
    }

2023

  • T. Phan, F. Ritz, P. Altmann, M. Zorn, J. Nüßlein, M. Kölle, T. Gabor, and C. Linnhoff-Popien, "Attention-Based Recurrence for Multi-Agent Reinforcement Learning under Stochastic Partial Observability," in Proceedings of the 40th International Conference on Machine Learning (ICML), 2023.
    [BibTeX] [Download PDF]
    @inproceedings{phanICML23,
    author = {Thomy Phan and Fabian Ritz and Philipp Altmann and Maximilian Zorn and Jonas N{\"u}{\ss}lein and Michael K{\"o}lle and Thomas Gabor and Claudia Linnhoff-Popien},
    title = {Attention-Based Recurrence for Multi-Agent Reinforcement Learning under Stochastic Partial Observability},
    year = {2023},
    publisher = {PMLR},
    booktitle = {Proceedings of the 40th International Conference on Machine Learning (ICML)},
    location = {Hawaii, USA},
    url  = {https://thomyphan.github.io/publication/2023-07-01-icml-phan},
    eprint  = {https://thomyphan.github.io/files/2023-icml-preprint.pdf},
    }

  • J. Stein, F. Chamanian, M. Zorn, J. Nüßlein, S. Zielinski, M. Kölle, and C. Linnhoff-Popien, "Evidence that PUBO outperforms QUBO when solving continuous optimization problems with the QAOA," in Proceedings of the Companion Conference on Genetic and Evolutionary Computation, New York, NY, USA, 2023, pp. 2254–2262. doi:10.1145/3583133.3596358
    [BibTeX] [Download PDF]
    @inproceedings{stein2023evidence,
    title={Evidence that PUBO outperforms QUBO when solving continuous optimization problems with the QAOA},
    author={Stein, Jonas and Chamanian, Farbod and Zorn, Maximilian and N{\"u}{\ss}lein, Jonas and Zielinski, Sebastian and K{\"o}lle, Michael and Linnhoff-Popien, Claudia},
    year = {2023},
    isbn = {9798400701207},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3583133.3596358},
    doi = {10.1145/3583133.3596358},
    booktitle = {Proceedings of the Companion Conference on Genetic and Evolutionary Computation},
    pages = {2254–2262},
    numpages = {9},
    location = {Lisbon, Portugal},
    series = {GECCO '23 Companion}
    }

  • M. Kölle, S. Illium, M. Zorn, J. Nüßlein, P. Suchostawski, and C. Linnhoff-Popien, "Improving Primate Sounds Classification using Binary Presorting for Deep Learning," in International Conference on Deep Learning Theory and Applications (DeLTA 2023), 2023.
    [BibTeX]
    @inproceedings{koelle23primate,
    title = {Improving Primate Sounds Classification using Binary Presorting for Deep Learning},
    author = {K{\"o}lle, Michael and Illium, Steffen and Zorn, Maximilian and N{\"u}{\ss}lein, Jonas and Suchostawski, Patrick and Linnhoff-Popien, Claudia},
    year = {2023},
    organization = {Int. Conference on Deep Learning Theory and Application - DeLTA 2023},
    publisher = {Springer CCIS Series},
    }

  • M. Zorn, S. Illium, T. Phan, T. K. Kaiser, C. Linnhoff-Popien, and T. Gabor, "Social Neural Network Soups with Surprise Minimization," in ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference, 2023, p. 65. doi:10.1162/isal_a_00671
    [BibTeX] [Download PDF]
    @inproceedings{zorn23surprise,
    author = {Zorn, Maximilian and Illium, Steffen and Phan, Thomy and Kaiser, Tanja Katharina and Linnhoff-Popien, Claudia and Gabor, Thomas},
    title = {Social Neural Network Soups with Surprise Minimization},
    booktitle = {ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference},
    pages = {65},
    year = {2023},
    month = {07},
    doi = {10.1162/isal_a_00671},
    url = {https://doi.org/10.1162/isal\_a\_00671},
    eprint = {https://direct.mit.edu/isal/proceedings-pdf/isal/35/65/2149250/isal\_a\_00671.pdf},
    }

2022

  • S. Illium, G. Griffin, M. Kölle, M. Zorn, J. Nüßlein, and C. Linnhoff-Popien, VoronoiPatches: Evaluating A New Data Augmentation Method, arXiv, 2022. doi:10.48550/ARXIV.2212.10054
    [BibTeX] [Download PDF]
    @misc{illium22voronoi,
    doi = {10.48550/ARXIV.2212.10054},
    url = {https://arxiv.org/abs/2212.10054},
    author = {Illium, Steffen and Griffin, Gretchen and Kölle, Michael and Zorn, Maximilian and Nüßlein, Jonas and Linnhoff-Popien, Claudia},
    keywords = {Computer Vision and Pattern Recognition (cs.CV), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {VoronoiPatches: Evaluating A New Data Augmentation Method},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
    }

  • S. Illium, M. Zorn, C. Lenta, M. Kölle, C. Linnhoff-Popien, and T. Gabor, Constructing Organism Networks from Collaborative Self-Replicators, arXiv, 2022. doi:10.48550/ARXIV.2212.10078
    [BibTeX] [Download PDF]
    @misc{illium22organism,
    doi = {10.48550/ARXIV.2212.10078},
    url = {https://arxiv.org/abs/2212.10078},
    author = {Illium, Steffen and Zorn, Maximilian and Lenta, Cristian and Kölle, Michael and Linnhoff-Popien, Claudia and Gabor, Thomas},
    keywords = {Neural and Evolutionary Computing (cs.NE), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Constructing Organism Networks from Collaborative Self-Replicators},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
    }

  • T. Gabor, M. Zorn, and C. Linnhoff-Popien, "The Applicability of Reinforcement Learning for the Automatic Generation of State Preparation Circuits," in Proceedings of the Genetic and Evolutionary Computation Conference Companion, New York, NY, USA, 2022, pp. 2196–2204. doi:10.1145/3520304.3534039
    [BibTeX] [Abstract] [Download PDF]

    State preparation is currently the only means to provide input data for quantum algorithms, but finding the shortest possible sequence of gates to prepare a given state is not trivial. We approach this problem using reinforcement learning (RL), first on an agent that is trained to only prepare a single fixed quantum state. Despite the overhead of training a whole network to just produce one single data point, gradient-based backpropagation appears competitive to genetic algorithms in this scenario, and single state preparation thus seems a worthwhile task. In a second case we then train a single network to prepare arbitrary quantum states to some degree of success, despite a complete lack of structure in the training data set. In both cases we find that training is severely improved by using QR decomposition to automatically map the agents' outputs to unitary operators to solve the problem of sparse rewards that usually makes this task challenging.

    @inproceedings{10.1145/3520304.3534039,
    author = {Gabor, Thomas and Zorn, Maximilian and Linnhoff-Popien, Claudia},
    title = {The Applicability of Reinforcement Learning for the Automatic Generation of State Preparation Circuits},
    year = {2022},
    isbn = {9781450392686},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3520304.3534039},
    doi = {10.1145/3520304.3534039},
    abstract = {State preparation is currently the only means to provide input data for quantum algorithm, but finding the shortest possible sequence of gates to prepare a given state is not trivial. We approach this problem using reinforcement learning (RL), first on an agent that is trained to only prepare a single fixed quantum state. Despite the overhead of training a whole network to just produce one single data point, gradient-based backpropagation appears competitive to genetic algorithms in this scenario and single state preparation thus seems a worthwhile task. In a second case we then train a single network to prepare arbitrary quantum states to some degree of success, despite a complete lack of structure in the training data set. In both cases we find that training is severely improved by using QR decomposition to automatically map the agents' outputs to unitary operators to solve the problem of sparse rewards that usually makes this task challenging.},
    booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference Companion},
    pages = {2196–2204},
    numpages = {9},
    keywords = {state preparation, actor/critic, quantum computing, circuit design, neural network, reinforcement learning},
    location = {Boston, Massachusetts},
    series = {GECCO '22}
    }
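
    The QR-decomposition step described in the abstract above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: `to_unitary` and the diagonal phase normalisation are assumptions chosen so that any generic (non-singular) square matrix an agent emits is mapped deterministically onto a unitary, i.e. a valid quantum operation.

    ```python
    import numpy as np

    def to_unitary(raw: np.ndarray) -> np.ndarray:
        """Map an arbitrary complex square matrix to a unitary via QR.

        Q from the QR decomposition is always unitary for non-singular
        input; scaling each column by the phase of R's diagonal removes
        the usual QR sign/phase ambiguity, making the map deterministic.
        """
        q, r = np.linalg.qr(raw)
        d = np.diagonal(r)
        phases = d / np.abs(d)  # assumes non-zero diagonal (generic case)
        return q * phases       # per-column phase rescaling; still unitary

    # example: an arbitrary 4x4 complex matrix becomes a valid 2-qubit gate
    rng = np.random.default_rng(0)
    raw = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    u = to_unitary(raw)
    print(np.allclose(u.conj().T @ u, np.eye(4)))  # True: u is unitary
    ```

    With such a projection, the RL agent can output unconstrained real numbers while every resulting action is still a legal unitary, which is the sparse-reward workaround the abstract refers to.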

  • T. Gabor, S. Illium, M. Zorn, C. Lenta, A. Mattausch, L. Belzner, and C. Linnhoff-Popien, "Self-Replication in Neural Networks," Artificial Life, pp. 205-223, 2022. doi:10.1162/artl_a_00359
    [BibTeX] [Abstract] [Download PDF]

    A key element of biological structures is self-replication. Neural networks are the prime structure used for the emergent construction of complex behavior in computers. We analyze how various network types lend themselves to self-replication. Backpropagation turns out to be the natural way to navigate the space of network weights and allows non-trivial self-replicators to arise naturally. We perform an in-depth analysis to show the self-replicators’ robustness to noise. We then introduce artificial chemistry environments consisting of several neural networks and examine their emergent behavior. In extension to this work’s previous version (Gabor et al., 2019), we provide an extensive analysis of the occurrence of fixpoint weight configurations within the weight space and an approximation of their respective attractor basins.

    @article{10.1162/artl_a_00359,
    author = {Gabor, Thomas and Illium, Steffen and Zorn, Maximilian and Lenta, Cristian and Mattausch, Andy and Belzner, Lenz and Linnhoff-Popien, Claudia},
    title = {{Self-Replication in Neural Networks}},
    journal = {Artificial Life},
    pages = {205-223},
    year = {2022},
    month = {06},
    abstract = {{A key element of biological structures is self-replication. Neural networks are the prime structure used for the emergent construction of complex behavior in computers. We analyze how various network types lend themselves to self-replication. Backpropagation turns out to be the natural way to navigate the space of network weights and allows non-trivial self-replicators to arise naturally. We perform an in-depth analysis to show the self-replicators’ robustness to noise. We then introduce artificial chemistry environments consisting of several neural networks and examine their emergent behavior. In extension to this works previous version (Gabor et al., 2019), we provide an extensive analysis of the occurrence of fixpoint weight configurations within the weight space and an approximation of their respective attractor basins.}},
    issn = {1064-5462},
    doi = {10.1162/artl_a_00359},
    url = {https://doi.org/10.1162/artl\_a\_00359},
    eprint = {https://direct.mit.edu/artl/article-pdf/doi/10.1162/artl\_a\_00359/2030914/artl\_a\_00359.pdf}
    }
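
    The fixpoint notion analysed in the article can be illustrated with a toy self-application test. This is a hypothetical reduction with an invented positional encoding, not the paper's exact scheme: flatten a small network's weights, ask the network itself for a value at each weight's position, and call the weights a fixpoint if they reproduce themselves. The all-zero network is the standard trivial self-replicator.

    ```python
    import numpy as np

    def forward(params, x):
        # tiny MLP: one tanh hidden layer, scalar output
        W1, b1, W2, b2 = params
        h = np.tanh(W1 @ x + b1)
        return (W2 @ h + b2).item()

    def flatten(params):
        return np.concatenate([p.ravel() for p in params])

    def self_apply(params):
        # hypothetical encoding: (normalised weight index, constant 1)
        flat = flatten(params)
        n = flat.size
        return np.array([forward(params, np.array([i / (n - 1), 1.0]))
                         for i in range(n)])

    def is_fixpoint(params, eps=1e-9):
        # a self-replicator reproduces its own weights under self-application
        return bool(np.max(np.abs(self_apply(params) - flatten(params))) < eps)

    # the zero network outputs 0 for every input, so it replicates itself
    zero = [np.zeros((3, 2)), np.zeros(3), np.zeros((1, 3)), np.zeros(1)]
    print(is_fixpoint(zero))  # True
    ```

    Non-trivial fixpoints, as studied in the article, are weight vectors other than zero that survive this self-application; the toy test above only shows what "fixpoint" means operationally.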

2021

  • T. Gabor, S. Illium, M. Zorn, and C. Linnhoff-Popien, Goals for Self-Replicating Neural Networks, 2021. doi:10.1162/isal_a_00439
    [BibTeX] [Download PDF]
    @inproceedings{gabor21goals,
    author = {Gabor, Thomas and Illium, Steffen and Zorn, Maximilian and Linnhoff-Popien, Claudia},
    title = {{Goals for Self-Replicating Neural Networks}},
    booktitle = {ALIFE 2021: The 2021 Conference on Artificial Life},
    year = {2021},
    month = {07},
    pages = {101},
    doi = {10.1162/isal_a_00439},
    url = {https://doi.org/10.1162/isal\_a\_00439}
    }

📚 Community

🎓 Theses

  • Sara Oropeza, Thomas Gabor, Maximilian Zorn, Claudia Linnhoff-Popien, Update Behavior of Neural Networks Trained in Alternation with an Additional Auxiliary Task, Bachelor’s Thesis, 2024
  • Marie Brockschmidt, Thomas Gabor, Maximilian Zorn, Claudia Linnhoff-Popien, Evolutionary Algorithm with Similarity-Based Variation for Job-Shop, Bachelor’s Thesis, 2024
  • Gregor Reischl, Maximilian Zorn, Michael Kölle, Claudia Linnhoff-Popien, Learning Independent Multi-Agent Flocking Behavior With Reinforcement Learning, Bachelor’s Thesis, 2024
  • Julian Thomas Reff, Thomas Gabor, Maximilian Zorn, Claudia Linnhoff-Popien, Neural Networks with a Regulatory Second Task on Neuron Level, Master’s Thesis, 2024
  • Jonathan Philip Wulf, Jonas Stein, Maximilian Zorn, Claudia Linnhoff-Popien, State Preparation on Quantum Hardware Using an Island Genetic Algorithm, Master’s Thesis, 2024
  • Jonas Blenninger, Jonas Stein, Maximilian Zorn, Claudia Linnhoff-Popien, CUAOA: A Novel CUDA-Accelerated Simulation Framework for the Quantum Approximate Optimization Algorithm, Master’s Thesis, 2024
  • Paulin Anwander, Thomas Gabor, Maximilian Zorn, Claudia Linnhoff-Popien, Measuring Relatedness in Evolutionary Algorithms via Superfluous Genes, Bachelor’s Thesis, 2024
  • Jonas Wild, Maximilian Zorn, Philipp Altmann, Claudia Linnhoff-Popien, Designing Meta-Rewards for Multi-Agent Reinforcement Learning Cooperation, Master’s Thesis, 2024
  • Daniel Seidl, Michael Kölle, Maximilian Zorn, Claudia Linnhoff-Popien, Evaluation of Metaheuristic Optimization Algorithms for Quantum Reinforcement Learning, Master’s Thesis, 2024
  • Ioan-Luca Ionescu, Maximilian Zorn, Fabian Ritz, Claudia Linnhoff-Popien, Specification Aware Evolutionary Error Search in Parameterized RL Environments, Master’s Thesis, 2024
  • Clara Goldmann, Fabian Ritz, Maximilian Zorn, Claudia Linnhoff-Popien, Balancing Populations with Multi-Agent Reinforcement Learning, Master’s Thesis, 2024
  • Simon Hackner, Philipp Altmann, Maximilian Zorn, Claudia Linnhoff-Popien, Diversity-Driven Pre-Training for Efficient Transfer RL, Bachelor’s Thesis, 2023
  • Moritz Glawleschkoff, Thomas Gabor, Maximilian Zorn, Claudia Linnhoff-Popien, Empowerment for Evolutionary Algorithms, Bachelor’s Thesis, 2023
  • Matthias Fruth, Fabian Ritz, Maximilian Zorn, Claudia Linnhoff-Popien, The Impact of Action Order in Multi-Agent Reinforcement Learning, Master’s Thesis, 2023