Commit 2ec3e12 (parent e2cc898)

Updating research highlights and publication page format

18 files changed: +63 −24 lines

_data/authors.yml

Lines changed: 1 addition & 1 deletion
@@ -65,7 +65,7 @@ Sriraam Natarajan:
   avatar: "/assets/images/headshots/sriraamnatarajan.jpg"
   location: "The University of Texas at Dallas"
-  bio: "Professor of Computer Science. <br> Principal Investigator.<br>AAAI fellow. <br> hessian.AI fellow, TU Darmstadt. <br> RBCDSAI Distinguished Faculty Fellow, IIT Madras."
+  bio: "Professor of Computer Science. <br> Principal Investigator.<br>AAAI fellow. <br> IJCAI Secretary-Treasurer. <br> hessian.AI fellow, TU Darmstadt. <br> RBCDSAI Distinguished Faculty Fellow, IIT Madras."
   twitter: "Sriraam_UTD"
   github: "boost-starai"
   linkedin: sriraam-natarajan-392a693

_news/2025-09-16-ACS-2025.md

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
+---
+layout: single
+title: "Our paper titled Human-Allied Relational Reinforcement Learning has been accepted to ACS 2025"
+date: 2025-09-16
+categories: news
+author: StARLinG Lab
+---
+
+
+Our paper, "Human-Allied Relational Reinforcement Learning," has been accepted to the 12th Annual Conference on Advances in Cognitive Systems (ACS) 2025!
+This paper introduces a novel framework that combines relational reinforcement learning with object-centric representations to handle both structured and unstructured data. It further enhances learning by explicitly modeling the uncertainty over the policy, which allows the system to actively query a human expert for guidance. To learn more about this work, see the [paper](http://www.cogsys.org/proceedings/2025/paper-2025-9.pdf).
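The expert-querying idea in the paper summary can be illustrated with a generic sketch. This is a hypothetical illustration, not the paper's actual algorithm: the entropy measure, the threshold value, and the `ask_expert` callback are all assumptions introduced here. The agent defers to the human only when its policy is uncertain about which action to take.

```python
import math


def policy_entropy(action_probs):
    # Shannon entropy of the action distribution (higher = more uncertain).
    return -sum(p * math.log(p) for p in action_probs if p > 0)


def choose_action(action_probs, ask_expert, threshold=1.0):
    # Query the human expert only when the policy is uncertain;
    # otherwise act greedily. `ask_expert` is a callback standing in
    # for the human in the loop (hypothetical interface).
    if policy_entropy(action_probs) > threshold:
        return ask_expert()
    return max(range(len(action_probs)), key=lambda a: action_probs[a])
```

With a peaked distribution the agent acts on its own; with a near-uniform one it asks the expert.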

_pages/publications.html

Lines changed: 14 additions & 14 deletions
@@ -4,45 +4,45 @@
 title: "Publications"
 $paragraph-indent: true
 ---
-
+
 <link rel="stylesheet" type="text/css" href="/assets/css/publications.css">
 <script src="https://code.jquery.com/jquery-3.1.0.min.js" integrity="sha256-cCueBR6CsyA4/9szpPfrX3s49M9vUU5BgtiJj06wt/s=" crossorigin="anonymous"></script>
 <script src="https://unpkg.com/[email protected]/dist/isotope.pkgd.js"></script>
-
+
 <h3>{{page.title}}</h3>
-
+
 <!-- Buttons for publication type filter starts -->
 <div class="button-group filter-button-group">
-<a class="button all active" data-filter="*">All</a>
+<a class="button all active" data-filter="*">All</a>
 {% for category in site.data.publication-type %}
-<a class="button" data-filter=".{{ category.id}}">{{ category.name}}</a>
+<a class="button" data-filter=".{{ category.id}}">{{ category.name}}</a>
 {% endfor %}
 </div>
 <!-- Buttons for publication type filter ends -->
-
+
 <!-- publication grouped by year starts-->
 {% assign items_grouped = site.data.publications | sort: 'year' | reverse | group_by: 'year' %}
 <div class="grid">
 {% for group in items_grouped %}
-
+
 <!-- year starts-->
 {% assign all_types = "" %}
 {% assign publication_grouped = group.items | group_by: 'publicationType' %}
-<div class="year {% for publication in publication_grouped %}{{ all_types | append: ' ' | append: publication.name }}{% endfor %}">{{group.name}}</div>
-<!-- year ends-->
-
+<div class="year {% for publication in publication_grouped %}{{ all_types | append: ' ' | append: publication.name }}{% endfor %}">{{group.name}}</div>
+<!-- year ends-->
+
 {% assign this_year = group.items | reverse %}
-
+
 <!-- publications start -->
 {% for publication in this_year %}
-<div style="" class="element-item {{publication.publicationType}}">{% for author in publication.authors %}{% unless forloop.first %}& {% endunless %}{% assign l_name = author | split: ' ' | last %}{% assign f_name = author | split: ' ' | pop %}{{ l_name }}, {%for name in f_name%}{{name | split:'' | first}}.{% endfor%}, {% endfor %}<i>{% if publication.url%}<a href="{{publication.url}}" >{% endif %}{{publication.title}}{% if publication.url%}</a>{% endif %}</i>, {{publication.venue}} {{publication.year}}. {% if publication.blog %}<a class="icon blog" href="{{publication.blog}}"><i class="far fas fa-blog"></i></a>{% endif %} {% if publication.code %}<a class="icon code" href="{{publication.code}}"><i class="fas fa-fw fa-code"></i></a>{% endif %} {% if publication.slide%}<a class="icon slides" href="{{publication.slide}}"><i class="far fa-fw fa-file-powerpoint"></i></a>{% endif %} {% if publication.video%}<a class="icon video" href="{{publication.video}}"><i class="fas fa-fw fa-video"></i></a>{% endif %} {% if publication.poster %}<a class="icon poster" href="{{publication.poster}}"><i class="far fa-fw fa-file-image"></i></a>{% endif %}</div>
+<div style="" class="element-item {{publication.publicationType}}">{% for author in publication.authors %}{% if forloop.first %}{{author}}{% else %}, {{author}}{% endif %}{% endfor %}, {% if publication.url%}<a href="{{publication.url}}" >{% endif %}{{publication.title}},{% if publication.url%}</a>{% endif %} {{publication.venue}} {{publication.year}}. {% if publication.blog %}<a class="icon blog" href="{{publication.blog}}"><i class="far fas fa-blog"></i></a>{% endif %} {% if publication.code %}<a class="icon code" href="{{publication.code}}"><i class="fas fa-fw fa-code"></i></a>{% endif %} {% if publication.slide%}<a class="icon slides" href="{{publication.slide}}"><i class="far fa-fw fa-file-powerpoint"></i></a>{% endif %} {% if publication.video%}<a class="icon video" href="{{publication.video}}"><i class="fas fa-fw fa-video"></i></a>{% endif %} {% if publication.poster %}<a class="icon poster" href="{{publication.poster}}"><i class="far fa-fw fa-file-image"></i></a>{% endif %}</div>
 {% endfor %}
-<!-- publications end -->
+<!-- publications end -->
 <div><br></div>
 {% endfor %}
 </div>
 <!-- publication grouped by year ends-->
-
+
 <script>
 // init Isotope
 var $grid = $('.grid').isotope({
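The substantive change in the template above is the author-name format: the old Liquid loop rendered each author as "Last, F.," (with "& " before every non-first author), while the new one simply prints full names separated by commas. A minimal Python sketch of the two behaviors, using hypothetical helper names and sample data:

```python
def old_format(authors):
    # Mirrors the old template: "Last, F.I., " per author,
    # with "& " prefixed to every non-first author.
    parts = []
    for i, author in enumerate(authors):
        words = author.split()
        last, first_names = words[-1], words[:-1]
        initials = "".join(w[0] + "." for w in first_names)
        prefix = "& " if i > 0 else ""
        parts.append(f"{prefix}{last}, {initials}, ")
    return "".join(parts)


def new_format(authors):
    # Mirrors the new template: full names joined with ", ".
    return ", ".join(authors)
```

The new form trades citation-style compactness for readability on the lab page and avoids the initial-extraction edge cases (e.g. middle names or single-word names) that the old loop had to handle character by character.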

_posts/2023-05-08-saurabh-mathur.md

Lines changed: 0 additions & 9 deletions
This file was deleted.
Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+---
+layout: single
+title: 'Sahil Sidheekh: "Building expressive and tractable generative models for reliable, human-allied AI"'
+date: 2025-12-01
+categories: research-highlights
+author: Sahil Sidheekh
+---
+
+I work at the intersection of probabilistic generative models and human-allied learning. My primary research focuses on building AI systems that are both expressive and interpretable, i.e., models that can reason, explain, and adapt under uncertainty. Much of my work centers on probabilistic circuits (PCs), exploring how structural constraints enable exact inference while retaining the flexibility of deep models. This includes developing better optimization schemes, learning methods, and hybrid deep-probabilistic architectures for PCs.
+A second thread of my research focuses on making AI systems more reliable and human-aligned. To this end, I work on credibility-aware multimodal fusion, human-allied learning frameworks, and models that incorporate domain constraints or human feedback to improve controllability and trust. My long-term goal is to build principled, tractable, and transparent generative models that can meaningfully collaborate with people.
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+---
+layout: single
+title: 'Athresh Karanam: "Developing Explainable and Efficient Machine Learning Algorithms for Deep Tractable Probabilistic Models"'
+date: 2025-12-01 15:46:00 -0600
+categories: research-highlights
+author: Bhagirath Athresh Karanam
+---
+
+The advent and seemingly ubiquitous success of deep learning over the past decade, across wide-ranging applications and multiple data modalities, has been contingent upon the availability of ever-growing amounts of data. Although this data-intensive approach is particularly effective in relatively data-rich domains, it imposes prohibitively expensive requirements in terms of data collection, labeling, and computational resources. The problem is compounded by the need for explainable models to build trust among domain experts in critical domains such as healthcare. To this end, I am broadly interested in developing effective and explainable machine learning algorithms that are data- and compute-efficient. In particular, I am interested in developing these techniques in the context of deep tractable probabilistic models, a class of probabilistic models capable of answering a wide variety of probabilistic queries exactly and efficiently.
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+---
+layout: single
+title: 'Nikhilesh Prabhakar: "Integrating Domain Knowledge For Efficient Sequential Decision Making in Relational Domains"'
+date: 2025-12-01 15:46:00 -0600
+categories: research-highlights
+author: Nikhilesh Prabhakar
+---
+
+My research focuses on sample-efficient reinforcement learning in relational domains, specifically on developing decision-making systems capable of using structured domain knowledge. Domain knowledge, instantiated through task hierarchies for planners and D-FOCI-based abstractions at the higher levels, can effectively guide lower-level policy learning. This integration is crucial for overcoming the combinatorial challenges inherent in large-scale relational environments. A central theme of my work is leveraging symmetries and shared relational structures across tasks and agents to learn policies that inherently generalize to novel object configurations, task variations, and complex multiagent settings. My recent work developed a framework that uses a planner as a centralized controller to efficiently learn policies that generalize to increasing numbers of agents and tasks in relational multiagent domains. Current research directions include learning lifted policies from demonstration data that contains privileged information, and exploring few-shot concept learning augmented with LLM-based priors, both aimed at further advancing the goal of learning efficiently.
File renamed without changes.
File renamed without changes.

0 commit comments
