
Commit 67b3f52
added abstracts
Parent: 6edd0dd
17 files changed: 86 additions, 12 deletions

_layouts/bib.html (+4 −3)

@@ -64,9 +64,7 @@
 {% endif %}

 <div class="links">
-{% if entry.abstract %}
-<a class="abstract btn btn-sm z-depth-0" role="button">Abstract</a>
-{% endif %}
+
 {% if entry.arxiv %}
 <a href="http://arxiv.org/abs/{{ entry.arxiv }}" class="btn btn-sm z-depth-0" role="button" target="_blank">arXiv</a>
 {% endif %}
@@ -110,6 +108,9 @@
 {% if entry.website %}
 <a href="{{ entry.website }}" class="btn btn-sm z-depth-0" role="button" target="_blank">Website</a>
 {% endif %}
+{% if entry.abstract %}
+<a class="abstract btn btn-sm z-depth-0" role="button">Abstract</a>
+{% endif %}
 </div>

 <!-- Hidden abstract block -->

_layouts/works.html (+3 −3)

@@ -58,9 +58,6 @@
 {% endif %}

 <div class="links">
-{% if entry.abstract %}
-<a class="abstract btn btn-sm z-depth-0" role="button">Abstract</a>
-{% endif %}
 {% if entry.arxiv %}
 <a href="http://arxiv.org/abs/{{ entry.arxiv }}" class="btn btn-sm z-depth-0" role="button" target="_blank">arXiv</a>
 {% endif %}
@@ -104,6 +101,9 @@
 {% if entry.website %}
 <a href="{{ entry.website }}" class="btn btn-sm z-depth-0" role="button" target="_blank">Website</a>
 {% endif %}
+{% if entry.abstract %}
+<a class="abstract btn btn-sm z-depth-0" role="button">Abstract</a>
+{% endif %}
 </div>

 <!-- Hidden abstract block -->

_pages/publications.md (+17 −5)

@@ -19,17 +19,29 @@ nav: true
 <a href="{{ project.url | relative_url }}">
 {% endif %} -->
 {% if project.img %}
-<div class="col-sm-4">
+<div class="col-sm-3">
 <img class="img-fluid" src="{{ project.img | relative_url }}" alt="project thumbnail">
 </div>
 {% endif %}
-<div class="col-sm-8">
+<div class="col-sm-9">
 <h3 class="card-title">{{ project.title }}</h3>
 <p class="card-text">{{ project.description }}</p>
-<div class="row abbr ml-1 p-0">
-<a href="{{ project.pdf }}" class="btn btn-sm z-depth-0 m-0" role="button" target="_blank">{{project.type}} <i class="fas fa-download"></i></a>
+<div class="row abbr ml-1 p-0 pubs">
+<div class="links">
+<a href="{{ project.pdf }}" class="btn btn-sm z-depth-0 m-0" role="button" target="_blank">{{project.type}} <i class="fas fa-download"></i></a>
+{% if project.abstract %}
+<a class="abstract btn btn-sm z-depth-0" role="button">Abstract</a>
+{% endif %}
+</div>
+{% if project.abstract %}
+<div class="abstract hidden">
+<p>{{ project.abstract }}</p>
+</div>
+{% endif %}
+
 </div>
-<div class="row ml-1 mr-1 p-0">
+<div class="row ml-1 mr-1 p-0 pubs">
+
 {% if project.github %}
 <div class="github-icon">
 <div class="icon" data-toggle="tooltip" title="Code Repository">

_publications/1_project.markdown (+1)

@@ -6,6 +6,7 @@ img: /assets/img/machinetask.png
 importance: 8
 type: Short Paper
 pdf: /pdfs/On_Optimizing_Human-Machine_Task_Assignments.pdf
+abstract: When crowdsourcing systems are used in combination with machine inference systems in the real world, they benefit the most when the machine system is deeply integrated with the crowd workers. However, if researchers wish to integrate the crowd with “off-the-shelf” machine classifiers, this deep integration is not always possible. This work explores two strategies to increase accuracy and decrease cost under this setting. First, we show that reordering tasks presented to the human can create a significant accuracy improvement. Further, we show that greedily choosing parameters to maximize machine accuracy is sub-optimal, and joint optimization of the combined system improves performance.
 ---

 Every project has a beautiful feature showcase page.

_publications/2_project.markdown (+1)

@@ -6,6 +6,7 @@ img: /assets/img/compliance.png
 pdf: /pdfs/Environmental_Compliance_ACM_DEV_15.pdf
 type: Poster
 importance: 9
+abstract: Industrial projects in India have to agree to specific sets of environmental conditions in order to function. Lack of compliance with these conditions results both in irreversible damage to the local environment as well as conflicts among the industry and the local community. Our aim is to provide a system that raises general awareness in the local community about the environmental conditions in vogue among the nearby industries so that compliance violations can be reported early on. We outline work in progress to mine the text of the clearance conditions and build a searchable mapping system that can answer various queries about these conditions.
 ---

 Every project has a beautiful feature showcase page.

_publications/3_project.markdown (+1)

@@ -6,6 +6,7 @@ img: /assets/img/notecode.png
 pdf: /pdfs/NoteCode_ACM_TEI_15.pdf
 type: Poster
 importance: 3
+abstract: We present the design of Note Code – a music programming puzzle game designed as a tangible device coupled with a Graphical User Interface (GUI). Tapping patterns and placing boxes in proximity enables programming these ‘note-boxes’ to store sets of notes, play them back and activate different subcomponents or neighboring boxes. This system provides users the opportunity to learn a variety of computational concepts, including functions, function calling and recursion, conditionals, as well as engage in composing music. The GUI adds a dimension of viewing the created programs and interacting with a set of puzzles that help discover the various computational concepts in the pursuit of creating target tunes, and optimizing the program made.
 ---

 Every project has a beautiful feature showcase page.

_publications/4_project.markdown (+1)

@@ -6,6 +6,7 @@ img: /assets/img/visualmath.png
 pdf: /pdfs/visualMath_cameraReady_iui17.pdf
 type: Poster
 importance: 4
+abstract: Math word problems are difficult for students to start with since they involve understanding the problem's context and abstracting out its underlying mathematical operations. A visual understanding of the problem at hand can be very useful for the comprehension of the problem. We present a system VisualMath that uses machine learning tools and crafted visual logic to automatically generate appropriate visualizations from the text of the word-problems and solve it. We demonstrate the improvements in the understanding of math word-problems by conducting a user study and learning of meaning of relevant new words by students.
 ---

 Every project has a beautiful feature showcase page.

_publications/5_project.markdown (+1)

@@ -6,6 +6,7 @@ img: /assets/img/eyamkayo.png
 importance: 5
 type: Poster
 pdf: /pdfs/eyamkayo-interactive-gaze.pdf
+abstract: This paper introduces EyamKayo, a first-of-its-kind interactive CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), using eye gaze and facial expression based human interactions, to better distinguish humans from software robots. Our system generates a sequence of instructions, asking the user to follow a controlled sequence of facial expressions. We evaluate user comfort and system usability, and validate using usability tests.
 ---

 Every project has a beautiful feature showcase page.

_publications/6_project.markdown (+1)

@@ -6,6 +6,7 @@ img: /assets/img/optidwell.png
 importance: 3
 type: Full Paper
 pdf: /pdfs/optidwell-intelligent-adjustment(3).pdf
+abstract: Gaze based navigation with digital screens offer a hands-free and touchless interaction, which is often useful in providing a hygienic interaction experience in a public kiosk scenario. The goodness of such a navigation system depends not only on the accuracy of detecting the eye gaze but also on the ability to determine whether a user is interested in clicking a button or is just looking at the button. The time for which a user needs to gaze at a particular button before it is considered as a click action is called the dwell time. In this paper, we explore intelligent adjustment of dwell times, where mouse click events on the buttons of a given application are emulated with user gaze. A constant dwell-time for all buttons and for all users may not provide an efficient and intuitive interface. We thereby propose a model to dynamically adjust dwell-time values used to emulate user mouse click events, exploiting the user’s experience with different portions of a given application. The adjustment happens at a per-user, per-button granularity, as a function of the user’s (a) prior usage experience of the given button within the application and (b) Midas touch characteristics for the given button. We propose OptiDwell, inspired by the action-value method based solutions to the Multi-Armed Bandits problem, for dwell click time adaptation. We experiment OptiDwell using an interactive TV channel browsing interface application, constituting of a mix of text and image buttons, over 10 computer-savvy users generating over 9000 click tasks. We observe significant improvement of user comfort level over the sessions, quantified by (a) improved (reduced) dwell times and (b) reduced number of Midas touches in spite of faster dwell-clicks, as high as 10-fold reduction in the best case. Our work is useful for creating an interface, with accurate, fast and comfortable dwell-clicks for each interface element (e.g., buttons), and each user.
 ---

 Every project has a beautiful feature showcase page.

_publications/7_project.markdown (+1)

@@ -6,6 +6,7 @@ img: /assets/img/incluset.png
 importance: 1
 type: Poster
 pdf: https://dl.acm.org/doi/10.1145/3373625.3418026
+abstract: Datasets and data sharing play an important role for innovation, benchmarking, mitigating bias, and understanding the complexity of real world AI-infused applications. However, there is a scarcity of available data generated by people with disabilities with the potential for training or evaluating machine learning models. This is partially due to smaller populations, disparate characteristics, lack of expertise for data annotation, as well as privacy concerns. Even when data are collected and are publicly available, it is often difficult to locate them. We present a novel data surfacing repository, called IncluSet, that allows researchers and the disability community to discover and link accessibility datasets. The repository is pre-populated with information about 139 existing datasets - 65 made publicly available, 25 available upon request, and 49 not shared by the authors but described in their manuscripts. More importantly, IncluSet is designed to expose existing and new dataset contributions so they may be discoverable through Google Dataset Search.
 ---

 Every project has a beautiful feature showcase page.

_publications/8_project.markdown (+1)

@@ -6,6 +6,7 @@ img: /assets/img/dataset_examples.png
 importance: 2
 pdf: https://www.researchgate.net/publication/348844784_Data_Sharing_in_Wellness_Accessibility_and_Aging
 type: Poster
+abstract: Datasets sourced from people with disabilities and older adults play an important role in innovation, benchmarking, and mitigating bias for both assistive and inclusive AI-infused applications. However, they are scarce. We conduct a systematic review of 137 accessibility datasets manually located across different disciplines over the last 35 years. Our analysis highlights how researchers navigate tensions between benefits and risks in data collection and sharing. We uncover patterns in data collection purpose, terminology, sample size, data types, and data sharing practices across communities of focus. We conclude by critically reflecting on challenges and opportunities related to locating and sharing accessibility datasets calling for technical, legal, and institutional privacy frameworks that are more attuned to concerns from these communities.
 ---

 Every project has a beautiful feature showcase page.

_publications/9_project.markdown (+1)

@@ -6,6 +6,7 @@ img: /assets/img/vocab.png
 importance: 5
 type: Full Paper
 pdf: https://files.eric.ed.gov/fulltext/ED593200.pdf
+abstract: As education gets increasingly digitized, and intelligent tutoring systems gain commercial prominence, scalable assessment generation mechanisms become a critical requirement for enabling increased learning outcomes. Assessments provide a way to measure learners' level of understanding and difficulty, and personalize their learning. There have been separate efforts in different areas to solve this by looking at different parts of the problem. This paper is a first effort to bring together techniques from diverse areas such as knowledge representation and reasoning, machine learning, inference on graphs, and pedagogy to generate automated assessments at scale. In this paper, we specifically address the problem of Multiple Choice Question (MCQ) generation for vocabulary learning assessments, specially catered to young learners (YL). We evaluate the efficacy of our approach by asking human annotators to annotate the questions generated by the system based on relevance. We also compare our approach with one baseline model and report high usability of MCQs generated by our system compared to the baseline.
 ---

 Every project has a beautiful feature showcase page.

_publications/assets21.md (+1)

@@ -6,6 +6,7 @@ img: /assets/img/inclusetproject.png
 importance: -1
 type: Full Paper
 pdf: https://arxiv.org/abs/2108.10665
+abstract: Iteratively building and testing machine learning models can help children develop creativity, flexibility, and comfort with machine learning and artificial intelligence. We explore how children use machine teaching interfaces with a team of 14 children (aged 7-13 years) and adult co-designers. Children trained image classifiers and tested each other's models for robustness. Our study illuminates how children reason about ML concepts, offering these insights for designing machine teaching experiences for children - (i) ML metrics (e.g., confidence scores) should be visible for experimentation; (ii) ML activities should enable children to exchange models for promoting reflection and pattern recognition; and (iii) the interface should allow quick data inspection (e.g., images vs. gestures).
 ---

 Every project has a beautiful feature showcase page.

_publications/vlhcc21.md (+2 −1)

@@ -1,8 +1,9 @@
 ---
 layout: page
 title: Exploring Machine Teaching with Children
-description: <b>Utkarsh, Dwivedi</b> Jaina Gandhi, Raj Parikh, Merijke Coenraad, Elizabeth Bonsignore and, Hernisa Kacorri. Exploring Machine Teaching with Children. <i>IEEE Symposium on Visual Languages and Human-Centric Computing (VLHCC '21).</i>
+description: <b>Dwivedi, U.,</b> Gandhi, J., Parikh, R., Coenraad, M., Bonsignore, E., and Kacorri, H. Exploring Machine Teaching with Children. <i>IEEE Symposium on Visual Languages and Human-Centric Computing (VLHCC '21).</i>
 img: /assets/img/ipaper2.png
+abstract: Iteratively building and testing machine learning models can help children develop creativity, flexibility, and comfort with machine learning and artificial intelligence. We explore how children use machine teaching interfaces with a team of 14 children (aged 7-13 years) and adult co-designers. Children trained image classifiers and tested each other's models for robustness. Our study illuminates how children reason about ML concepts, offering these insights for designing machine teaching experiences for children - (i) ML metrics (e.g., confidence scores) should be visible for experimentation; (ii) ML activities should enable children to exchange models for promoting reflection and pattern recognition; and (iii) the interface should allow quick data inspection (e.g., images vs. gestures).
 importance: -1
 type: Full Paper
 pdf: https://arxiv.org/abs/2109.11434

_sass/_base.scss (+50)

@@ -493,7 +493,57 @@ footer.sticky-bottom {
 }
 }
 }
+.pubs {
+  .links {
+    a.btn {
+      color: var(--global-text-color);
+      border: 1px solid var(--global-text-color);
+      padding-left: 1rem;
+      padding-right: 1rem;
+      padding-top: 0.25rem;
+      padding-bottom: 0.25rem;
+      &:hover {
+        color: var(--global-theme-color);
+        border-color: var(--global-theme-color);
+      }
+    }
+  }
+  .hidden {
+    font-size: 0.875rem;
+    max-height: 0px;
+    overflow: hidden;
+    text-align: justify;
+    -webkit-transition: 0.15s ease;
+    -moz-transition: 0.15s ease;
+    -ms-transition: 0.15s ease;
+    -o-transition: 0.15s ease;
+    transition: all 0.15s ease;

+    p {
+      line-height: 1.4em;
+      margin: 10px;
+    }
+    pre {
+      font-size: 1em;
+      line-height: 1.4em;
+      padding: 10px;
+    }
+  }
+  .hidden.open {
+    max-height: 100em;
+    -webkit-transition: 0.15s ease;
+    -moz-transition: 0.15s ease;
+    -ms-transition: 0.15s ease;
+    -o-transition: 0.15s ease;
+    transition: all 0.15s ease;
+  }
+  div.abstract.hidden {
+    border: dashed 1px white;
+  }
+  div.abstract.hidden.open {
+    border-color: $grey-color;
+  }
+}
 // Rouge Color Customization
 code {
 color: var(--global-theme-color);
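The `.hidden` / `.hidden.open` rules above only take effect if a script toggles the `open` class on `div.abstract.hidden` when the Abstract button is clicked. That handler is not part of this commit (the theme's existing bibliography script presumably provides it); the following is a minimal sketch of the toggle logic, with a hypothetical function name and a stub element so it runs outside a browser:

```javascript
// Sketch of the abstract toggle: clicking the "Abstract" button flips the
// `open` class on the matching `div.abstract.hidden`, so the CSS max-height
// transition expands (max-height: 100em) or collapses (max-height: 0) it.
// In a browser this would be wired up roughly as:
//   document.querySelectorAll('a.abstract').forEach(btn =>
//     btn.addEventListener('click', () => toggleAbstract(
//       btn.closest('.pubs').querySelector('div.abstract.hidden'))));

function toggleAbstract(abstractEl) {
  abstractEl.classList.toggle('open');
  return abstractEl.classList.contains('open');
}

// Node-friendly demo with a stub element (no DOM available here):
const stubEl = {
  classList: {
    _set: new Set(['abstract', 'hidden']),
    toggle(c) { this._set.has(c) ? this._set.delete(c) : this._set.add(c); },
    contains(c) { return this._set.has(c); },
  },
};

console.log(toggleAbstract(stubEl)); // true  (abstract expanded)
console.log(toggleAbstract(stubEl)); // false (collapsed again)
```

Keeping the state purely in a CSS class means the stylesheet owns the animation and the script stays a one-line toggle.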

pdfs/CV_NOV30_2020.pdf (−616 KB)
Binary file not shown.

pdfs/incluset.pdf (3.73 MB)
Binary file not shown.
