Salimeh Sekeh wants to teach AI how to manage itself
Salimeh Sekeh wants to help humanity — and she sees artificial intelligence as a key way to do that. At the University of Maine, Sekeh leads research that designs machine learning models and, in essence, teaches artificial intelligence to improve itself, work with wide-ranging applications in Maine and beyond.
Before Sekeh joined the University of Maine in 2019, she was a postdoctoral researcher in electrical engineering and computer science at the University of Michigan. On the North Campus, where she spent most of her days, she often attended events about how artificial intelligence was being used to improve everything from autonomous vehicles to medical machinery.
Sekeh was already passionate about algorithm design — the architecture of code that determines what a program can and can’t do — and began envisioning how she could use her skills to keep improving these technologies in positive ways.
“AI is a technology that can help us shape the world that we want to live in,” Sekeh says. “It doesn’t apply to one single problem. Anything can be somehow linked because in anything you have data.”
When she was applying for tenure-track positions, she saw an opportunity to bring her interest in machine learning to UMaine, which she learned was planning to invest heavily in machine learning, data science and artificial intelligence research.
Now, Sekeh studies deep neural networks — basically, a subset of artificial intelligence that learns by mimicking the complex connections in our own brains. Deep neural networks are already present in many aspects of daily life, from virtual assistants like Siri and Alexa and self-driving cars to photo tagging suggestions on Facebook that seem to get more uncannily accurate every day.
Sekeh explains that no matter their function, deep neural networks need training to perform the tasks they are assigned and make decisions. Facial recognition software, for example, must learn the difference between faces before it can say whose face, exactly, is pictured.
Sekeh uses the example of a baby learning basic tasks, like eating. A researcher provides a deep neural network with “training data” much as a parent demonstrates the basic mechanics of eating to a baby, until eventually the baby is able to eat on its own.
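For readers curious what “training data” looks like in practice, here is a minimal sketch of a supervised training loop. It assumes PyTorch, and the toy model and randomly generated data are purely illustrative, not Sekeh’s actual code.

```python
# Minimal sketch of training a small neural network (illustrative only).
import torch
import torch.nn as nn

# Toy "training data": 256 examples with 20 features each, in 4 classes.
X = torch.randn(256, 20)
y = torch.randint(0, 4, (256,))

# A small feed-forward network standing in for a deep neural network.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Repeated exposure to labeled examples, like a parent demonstrating a
# task, gradually adjusts the network's weights until it can perform
# the task on its own.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```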
However, deep neural networks are complex and require a large amount of computer memory to operate. As the technology and algorithms continue to evolve and improve, figuring out how to compress them without losing their functionality and performance is increasingly important.
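One common compression strategy is pruning: zeroing out the weights that contribute least so the network takes less memory. The sketch below uses PyTorch’s built-in pruning utility; it illustrates the general idea, not necessarily the technique Sekeh’s group uses.

```python
# Sketch: magnitude pruning, one common way to compress a network.
# Illustrative only; not necessarily the method Sekeh's group uses.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))

# Zero out the 50% of weights with the smallest magnitude in each layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeros in permanently

# Roughly half the weights are now zero, which (with sparse storage)
# shrinks the memory footprint without retraining from scratch.
zeros = sum(int((p == 0).sum()) for p in model.parameters() if p.dim() > 1)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
print(f"{zeros / total:.0%} of weights pruned")
```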
Sekeh wants to figure out how to prototype the architecture of these deep neural networks so that, through a process known as “continual learning,” they are better able to figure out which learned skills they need for a given task. With Sekeh’s algorithm, deep neural networks will be able to set aside, or “freeze,” unneeded functionalities for use in a later task instead of discarding them entirely, as is often the case with existing algorithms.
“Once I learn how to eat, I don’t forget,” Sekeh says. “When you are teaching a baby to walk or do any other task, it doesn’t forget how to eat. The part of the brain that has already learned to eat is ‘frozen’ for that. The result of this process is lifelong learning, which is what humans do.”
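In code, this kind of “freezing” can be as simple as marking the parameters that serve an already-learned skill as untrainable before training on the next task. The sketch below, again assuming PyTorch and a toy model, shows only the general mechanism; the details of Sekeh’s algorithm differ.

```python
# Sketch: "freezing" part of a network so a new task cannot overwrite
# an already-learned skill. Illustrates the general idea of continual
# learning, not Sekeh's specific algorithm.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))

# Suppose the first layer encodes a skill learned for task A ("eating").
# Freezing it stops gradient updates there, so task B ("walking") is
# learned by the remaining layers without forgetting task A.
for param in model[0].parameters():
    param.requires_grad = False

# Train only the parameters that remain trainable on the new task.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```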
In 2021, Sekeh received $80,000 from the National Science Foundation for her research. She hopes that her compression techniques will make deep neural networks less expensive to run, expanding their use in smaller devices like cell phones and drones, and in limited-resource environments — an aerial drone gathering data on a remote forest, for example.
Sekeh’s deep neural network research isn’t limited to compression, though. In 2022, she received a second NSF grant of $679,004 — this time, an Early CAREER Award — to research machine learning robustness: the ability of models to handle noise or perturbations without losing functionality, performing well even under adversarial conditions.
Think of an autonomous vehicle camera detecting a stop sign, but the image is blurry because the car hit a bump or it is raining outside. A network that lacks robustness may interpret this noisy image as a “slow” sign, which would put passengers in danger.
“We have some data that makes a network vulnerable and fools the network,” Sekeh says. “Our mission is that when we are learning tasks and training deep learning algorithms, we teach the network to be robust to those adversarial examples.”
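One standard recipe for this in the research literature is adversarial training: deliberately perturb each input in the direction that most confuses the network, then train on the perturbed version. The sketch below uses the fast gradient sign method (FGSM), a common technique shown here for illustration; the article does not specify which approach Sekeh’s group uses.

```python
# Sketch: one step of adversarial training with the fast gradient sign
# method (FGSM), a standard technique from the literature. Illustrative;
# the article does not specify Sekeh's exact approach.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 20)         # clean inputs (think: images of signs)
y = torch.randint(0, 4, (32,))  # true labels

# 1. Find the small perturbation that most increases the loss.
x_adv = x.clone().requires_grad_(True)
loss_fn(model(x_adv), y).backward()
epsilon = 0.1                   # perturbation budget (the "noise level")
x_adv = (x + epsilon * x_adv.grad.sign()).detach()

# 2. Train on the perturbed inputs, so noisy or adversarial data
#    (a rain-blurred stop sign, say) no longer fools the network.
optimizer.zero_grad()
loss_fn(model(x_adv), y).backward()
optimizer.step()
```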
Sekeh says the machine learning industry tends to keep the ideas of robustness and compression separate, but through her research, she aims to unite the two to make deep neural networks that are both better and more efficient overall.
“We’re saying, ‘Hold on a second, if you’re doing compression and part of your network gets discarded, isn’t it vulnerable?’” Sekeh says. “Let’s do it simultaneously: compress it and address the robustness. We’re working on them both independently and where they overlap to improve the deep learning models’ performance in an efficient and robust fashion.”
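Put together, the “do both at once” idea might look like pruning a network and then adversarially fine-tuning what remains. The sketch below simply composes the two earlier examples; it is a loose illustration, not Sekeh’s published method.

```python
# Sketch: compression and robustness together: prune, then adversarially
# fine-tune the smaller network. Composes the earlier sketches; a loose
# illustration, not Sekeh's published method.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
loss_fn = nn.CrossEntropyLoss()

# Step 1: compress by pruning half the weights in each linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Step 2: adversarially fine-tune, so the pruned (and potentially more
# vulnerable) network stays robust to perturbed inputs.
x = torch.randn(32, 20)
y = torch.randint(0, 4, (32,))
x_adv = x.clone().requires_grad_(True)
loss_fn(model(x_adv), y).backward()
x_adv = (x + 0.1 * x_adv.grad.sign()).detach()

optimizer.zero_grad()
loss_fn(model(x_adv), y).backward()
optimizer.step()
```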
Sekeh envisions many ways that her research can apply to solving problems in Maine and beyond. Robust and efficient deep neural networks will not only make autonomous cars safer to drive, even in the snowiest parts of Maine, but will also make drones and other autonomous research vehicles more accurate and usable for farmers, foresters, marine scientists and more.
Sekeh sees education as an essential element of her work — and not just teaching AI. She is organizing two summer boot camps at the Roux Institute where undergraduate students can learn more about deep neural networks, helping train the next generation of scientists like her.
Contact: Sam Schipani, samantha.schipani@maine.edu