
Commit 264179e

llm notes refactor
1 parent 9943b81 commit 264179e

10 files changed: +1298 −1388 lines changed

_includes/01_research.html

Lines changed: 10 additions & 10 deletions
@@ -25,8 +25,8 @@ <h2 style="text-align: center; margin-top: -150px;"> Research</h2>
  interpretability</a>.
  <br>
  <br>
- <a href="https://www.nature.com/articles/s41467-023-43713-1">augmented imodels</a> - use LLMs to build a
- transparent model<br>
+ <a href="https://www.nature.com/articles/s41467-023-43713-1">augmented imodels</a> - build a
+ transparent model using LLMs<br>
  <!-- <a href="https://arxiv.org/abs/2310.14034">tree prompting</a> - improve black-box few-shot text classification -->
  <!-- with decision trees<br> -->
  <a href="https://arxiv.org/abs/2311.02262">attention steering</a> - mechanistically guide LLMs by
@@ -62,10 +62,10 @@ <h2 style="text-align: center; margin-top: -150px;"> Research</h2>
  <!-- href="https://www.cs.utexas.edu/~huth/index.html">Huth lab</a> at UT Austin). -->
  <br>
  <br>
- <a href="https://arxiv.org/abs/2410.00812">explanation-mediated validation</a> - test fMRI
- explanations using LLM-generated stimuli<br>
- <a href="https://arxiv.org/abs/2405.16714">qa embeddings</a> - predict fMRI language responses by
- asking yes/no questions to LLMs<br>
+ <a href="https://arxiv.org/abs/2410.00812">explanation-mediated validation</a> - causally test fMRI
+ explanations with LLM-generated stimuli<br>
+ <a href="https://arxiv.org/abs/2405.16714">qa encoding models</a> - model fMRI language responses to verbal
+ theories using LLM annotations<br>
  <a href="https://arxiv.org/abs/2305.09863">summarize &amp; score explanations</a> - generate natural-language
  explanations of fMRI encoding models<br>
  </div>
@@ -82,12 +82,12 @@ <h2 style="text-align: center; margin-top: -150px;"> Research</h2>
  <br>
  <a href="https://arxiv.org/pdf/2201.11931">greedy tree sums</a> - build accurate, compact tree-based clinical
  models<br>
- <a href="https://arxiv.org/abs/2306.00024">clinical self-verification</a> - self-verification improves
- performance and interpretability of clinical information extraction<br>
+ <a href="https://arxiv.org/abs/2306.00024">clinical self-verification</a> - improve LLM-based clinical
+ information extraction with self-verification<br>
  <a href="https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000076">clinical rule
- vetting</a> - stress testing a clinical decision instrument performance for intra-abdominal injury<br>
+ vetting</a> - test a clinical decision instrument for evaluating intra-abdominal injury<br>
  <a href="https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000076">clinical rule
- bias assessment</a> - evaluating bias in the development of popular clinical decision instruments<br>
+ bias assessment</a> - evaluate biases in the development of popular clinical decision instruments<br>
  </div>

  <div style="width: 100%;padding: 8px;margin-bottom: 20px; text-align:center; font-size: large;">

_notes/ai/llms.md

Lines changed: 294 additions & 0 deletions
Large diffs are not rendered by default.

_notes/assets/fido.png

-143 KB
Binary file not shown.

_notes/assets/interp_eval_table.png

-535 KB
Binary file not shown.

_notes/assets/prompting_hierarchy.png

-95 KB
Binary file not shown.

_notes/neuro/comp_neuro.md

Lines changed: 6 additions & 2 deletions
@@ -1122,10 +1122,14 @@ subtitle: Diverse notes on various topics in computational neuro, data-driven ne
  - NeuroQuery, comprehensive meta-analysis of human brain mapping ([dockes, poldrack, ..., yarkonig, suchanek, thirion, & varoquax](https://elifesciences.org/articles/53385)) [[website](https://neuroquery.org/query?text=checkerboard)]
    - train on keywords to directly predict weights for each query-expanded keyword and then produce a linearly combined brainmap

- ## speech
+ ## speech / ECoG

  - Improving semantic understanding in speech language models via brain-tuning ([moussa, klakow, & toneva, 2024](https://arxiv.org/abs/2410.09230))
- - BrainWavLM: Fine-tuning Speech Representations with Brain Responses to Language ([vattikonda, vaidya, antonello, & huth, 2025](https://arxiv.org/abs/2502.08866))
+ - BrainWavLM: Fine-tuning Speech Representations with Brain Responses to Language ([vattikonda, vaidya, antonello, & huth, 2025](https://arxiv.org/abs/2502.08866))
+
+ - A shared model-based linguistic space for transmitting our thoughts from brain to brain in natural conversations ([zada...hasson, 2024](https://www.cell.com/neuron/fulltext/S0896-6273(24)00460-4))
+   - previous inter-subject correlation analyses directly map between the speaker’s brain activity & the listener’s brain activity during communication
+   - this work adds a semantic feature space to predict speaker/listener activity & partitions the variance explained when predicting the other person’s brain activity from these features


  # advanced topics

0 commit comments
