Friday, January 3, 2025

editable neural networks in health science

25. Meng, K., Bau, D., Andonian, A. & Belinkov, Y. Locating and editing factual associations in GPT. Adv. Neural Inf. Process. Syst. (2022). at <https://proceedings.neurips.cc/paper_files/paper/2022/hash/6f1d43d5a82a37e89b0665b33bf3a182-Abstract-Conference.html>

26. Meng, K., Sharma, A. S., Andonian, A., Belinkov, Y. & Bau, D. Mass-editing memory in a transformer. in International Conference on Learning Representations (2023). at <https://arxiv.org/abs/2210.07229>

27. Mitchell, E., Lin, C., Bosselut, A., Manning, C. D. & Finn, C. Memory-Based Model Editing at Scale. in Proceedings of the 39th International Conference on Machine Learning (eds. Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G. & Sabato, S.) 162, 15817–15831 (PMLR, 2022).

28. Hartvigsen, T., Sankaranarayanan, S., Palangi, H., Kim, Y. & Ghassemi, M. Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors. in Advances in Neural Information Processing Systems (2023). at <https://arxiv.org/abs/2211.11031>

29. Mitchell, E., Lin, C., Bosselut, A., Finn, C. & Manning, C. Fast model editing at scale. in International Conference on Learning Representations (2022). at <https://arxiv.org/abs/2110.11309>

30. Sinitsin, A., Plokhotnyuk, V., Pyrkin, D., Popov, S. & Babenko, A. Editable Neural Networks. in International Conference on Learning Representations (2020). at <http://arxiv.org/abs/2004.00345>

31. De Cao, N., Aziz, W. & Titov, I. Editing Factual Knowledge in Language Models. in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing 6491–6506 (Association for Computational Linguistics, 2021).

32. Zhong, Z., Wu, Z., Manning, C. D., Potts, C. & Chen, D. MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions. arXiv [cs.CL] (2023). at <http://arxiv.org/abs/2305.14795>

33. Cohen, R., Biran, E., Yoran, O., Globerson, A. & Geva, M. Evaluating the ripple effects of knowledge editing in language models. Trans. Assoc. Comput. Linguist. 12, 283–298 (2024).

34. De Cao, N., Aziz, W. & Titov, I. Editing Factual Knowledge in Language Models. arXiv [cs.CL] (2021). at <http://arxiv.org/abs/2104.08164>

35. Meng, K., Sharma, A. S., Andonian, A., Belinkov, Y. & Bau, D. Mass-Editing Memory in a Transformer. arXiv [cs.CL] (2022). at <http://arxiv.org/abs/2210.07229>

36. Mitchell, E., Lin, C., Bosselut, A., Finn, C. & Manning, C. D. Fast Model Editing at Scale. arXiv [cs.LG] (2021). at <http://arxiv.org/abs/2110.11309>

37. Hartvigsen, T., Sankaranarayanan, S., Palangi, H., Kim, Y. & Ghassemi, M. Aging with GRACE: Lifelong Model Editing with Key-Value Adaptors. (2022). at <https://openreview.net/pdf?id=ngCT1EelZk>

Yao, Y., Wang, P., Tian, B., Cheng, S., Li, Z., Deng, S., Chen, H. & Zhang, N. Editing Large Language Models: Problems, Methods, and Opportunities. arXiv [cs.CL] (2023). at <http://arxiv.org/abs/2305.13172>

41. Hase, P., Hofweber, T., Zhou, X., Stengel-Eskin, E. & Bansal, M. Fundamental problems with model editing: How should rational belief revision work in LLMs? arXiv [cs.CL] (2024). at <https://scholar.google.com/citations?view_op=view_citation&hl=en&citation_for_view=FO90FgMAAAAJ:M3ejUd6NZC8C>

42. Cheng, S., Tian, B., Liu, Q., Chen, X., Wang, Y., Chen, H. & Zhang, N. Can We Edit Multimodal Large Language Models? in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (eds. Bouamor, H., Pino, J. & Bali, K.) 13877–13888 (Association for Computational Linguistics, 2023).



Direct links to the papers above:


1. **Locating and editing factual associations in GPT**  
   Meng, K., Bau, D., Andonian, A. & Belinkov, Y. (2022).  
   [Link to Paper](https://proceedings.neurips.cc/paper_files/paper/2022/hash/6f1d43d5a82a37e89b0665b33bf3a182-Abstract-Conference.html)


2. **Mass-editing memory in a transformer**  
   Meng, K., Sharma, A. S., Andonian, A., Belinkov, Y. & Bau, D. (2023).  
   [Link to Paper](https://arxiv.org/abs/2210.07229)


3. **Memory-Based Model Editing at Scale**  
   Mitchell, E., Lin, C., Bosselut, A., Manning, C. D. & Finn, C. (2022).  
   [Link to Paper](https://proceedings.mlr.press/v162/mitchell22a.html)


4. **Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors**  
   Hartvigsen, T., Sankaranarayanan, S., Palangi, H., Kim, Y. & Ghassemi, M. (2023).  
   [Link to Paper](https://arxiv.org/abs/2211.11031)


5. **Fast model editing at scale**  
   Mitchell, E., Lin, C., Bosselut, A., Finn, C. & Manning, C. D. (2022).  
   [Link to Paper](https://arxiv.org/abs/2110.11309)


6. **Editable Neural Networks**  
   Sinitsin, A., Plokhotnyuk, V., Pyrkin, D., Popov, S. & Babenko, A. (2020).  
   [Link to Paper](http://arxiv.org/abs/2004.00345)


7. **Editing Factual Knowledge in Language Models**  
   De Cao, N., Aziz, W. & Titov, I. (2021).  
   [Link to Paper](http://arxiv.org/abs/2104.08164)


8. **MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions**  
   Zhong, Z., Wu, Z., Manning, C. D., Potts, C. & Chen, D. (2023).  
   [Link to Paper](http://arxiv.org/abs/2305.14795)


9. **Evaluating the ripple effects of knowledge editing in language models**  
   Cohen, R., Biran, E., Yoran, O., Globerson, A. & Geva, M. (2023).  
   [Link to Paper](https://transacl.org/ojs/index.php/tacl/article/view/3736)


10. **Editing Large Language Models: Problems, Methods, and Opportunities**  
    Yao, Y., Wang, P., Tian, B., Cheng, S., Li, Z., Deng, S., Chen, H. & Zhang, N. (2023).  
    [Link to Paper](http://arxiv.org/abs/2305.13172)


11. **Fundamental problems with model editing: How should rational belief revision work in LLMs?**  
    Hase, P., Hofweber, T., Zhou, X., Stengel-Eskin, E. & Bansal, M. (2024).  
    [Link to Paper](https://scholar.google.com/citations?view_op=view_citation&hl=en&citation_for_view=FO90FgMAAAAJ:M3ejUd6NZC8C)


12. **Can We Edit Multimodal Large Language Models?**  
    Cheng, S., Tian, B., Liu, Q., Chen, X., Wang, Y., Chen, H. & Zhang, N. (2023).  
    [Link to Paper](https://arxiv.org/abs/2310.08475)
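
A thread running through these papers is a single primitive: change what a network returns for one specific key without retraining it. Below is a minimal sketch of the two dominant flavors, in plain NumPy rather than anyone's released code. The names `rank_one_edit` and `GraceStyleAdaptor`, the `eps` threshold, and the toy shapes are my own illustrations: the first is a simplified, covariance-free version of the direct weight update in the ROME/MEMIT line (those papers additionally whiten the key with a covariance estimated from a large corpus), and the second imitates the spirit of GRACE's discrete key-value codebook, which leaves the weights frozen and overrides activations at inference.

```python
import numpy as np

def rank_one_edit(W: np.ndarray, k: np.ndarray, v_new: np.ndarray) -> np.ndarray:
    """Minimal-norm rank-one update: W' = W + (v_new - W k) k^T / (k^T k),
    chosen so that W' @ k == v_new exactly while leaving directions
    orthogonal to k untouched. ROME/MEMIT also whiten k with an estimated
    key covariance; that refinement is omitted here."""
    residual = v_new - W @ k                  # what the current weights get wrong for k
    return W + np.outer(residual, k) / (k @ k)

class GraceStyleAdaptor:
    """GRACE-flavored discrete key-value adaptor: the wrapped layer stays
    frozen, and edits live in a small codebook consulted at inference."""
    def __init__(self, layer_fn, eps: float = 1.0):
        self.layer_fn = layer_fn              # the frozen layer being wrapped
        self.keys, self.values = [], []       # codebook of (key, override) pairs
        self.eps = eps                        # deferral radius around each key
    def add_edit(self, h_key: np.ndarray, v_new: np.ndarray) -> None:
        self.keys.append(h_key)
        self.values.append(v_new)
    def __call__(self, h: np.ndarray) -> np.ndarray:
        for key, val in zip(self.keys, self.values):
            if np.linalg.norm(h - key) < self.eps:
                return val                    # inside an edit region: override
        return self.layer_fn(h)               # otherwise behave as the frozen layer

# Toy check: both mechanisms send the edited key to the new value.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
k, v_new = rng.normal(size=3), rng.normal(size=4)

assert np.allclose(rank_one_edit(W, k, v_new) @ k, v_new)   # weight edit lands exactly

adaptor = GraceStyleAdaptor(lambda h: W @ h, eps=0.5)
adaptor.add_edit(k, v_new)
assert np.allclose(adaptor(k), v_new)                       # codebook override fires
assert np.allclose(adaptor(k + 10.0), W @ (k + 10.0))       # distant inputs untouched
```

The contrast motivates much of the list above: direct weight edits cost nothing extra at inference but can interfere with one another as they accumulate, which is the failure mode the GRACE paper targets with its codebook, at the price of a lookup per call.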

