===== Ben's Weekly Updates =====

===== February 28th =====

**R&F audit manuscript**

  - [[https://docs.google.com/document/d/1EQaIFTcF_-acYU1wxfL3nmrbStT2SUPx/edit|Outline]]. Feedback welcome
  - Relevant R&F [[https://docs.google.com/spreadsheets/d/1GyDJn0TIw1NvFAR_K3B5KNqL7NZO-RH3/edit#gid=1403080269|atoms]]. I'll be adding a few more "reliability" atoms and filling in details of our current state
  - Working with Jonathan on interview questions for decision makers

===== February 14th =====

**General**

  - Fixed access to the secure server

**R&F audit manuscript**

  - Sketch outline and ask for feedback
  - Go through 10 "atoms": how many did we calculate? How many could we calculate?
  - Determine roughly what would be needed in the methods section. Possible paper to follow: [[https://pubmed-ncbi-nlm-nih-gov.stanford.idm.oclc.org/34152373/|Epic Sepsis model validation]]
  - Find references (empirical and theoretical) that justify the choice of audit [[https://docs.google.com/presentation/d/1r5ECbPW1rtvBY3YsDnwhpCKQtgff0CB1c0VrYheGDOo/edit#slide=id.g100b20b7a0d_0_21|starting point]]
  - Begin to plan the interviews with stakeholders about the utility of the audit. What questions should we be asking? Is there literature that supports what questions to ask?

===== February 7th =====

**General**

  - Currently working with Jason/IT on getting access to the secure server (not a show stopper)

**R&F audit manuscript**

  - Sketch outline and ask for feedback (link soon)
  - Determine roughly what would be needed in the methods section. Possible paper to follow: [[https://pubmed-ncbi-nlm-nih-gov.stanford.idm.oclc.org/34152373/|Epic Sepsis model validation]]
  - Find references (empirical and theoretical) that justify the choice of audit [[https://docs.google.com/presentation/d/1r5ECbPW1rtvBY3YsDnwhpCKQtgff0CB1c0VrYheGDOo/edit#slide=id.g100b20b7a0d_0_21|starting point]]
  - Begin to plan the interviews with stakeholders about the utility of the audit. What questions should we be asking? Is there literature that supports what questions to ask?

Outside of research: busy week with courses coming up!

===== January 31st =====

**Completed**

  - Submitted abstract for the reliability and fairness audit of ACP models (manuscript due March 31st)
  - Finished onboarding: Get access to data section

**TODO**

  - Start drafting background information for the manuscript
  - Meet with team to determine goals and a plan for the survey/interview
  
===== January 24th =====

Personal: Childcare isn't happening again this week (covid), so this'll be the fourth long week in a row!

===== January 18th =====
**Completed**

  - Explored explainability project. Decided that my research question had pretty much been answered, so I'll pivot to a new project. Jason pointed out an interesting paper involving what they call concept bottlenecks: [[https://arxiv.org/abs/2007.04612]]
  - Finished onboarding: Summary of work streams

**TODO**

  - Flesh out research question: What is a good way to perform a reliability and fairness audit of multiple algorithms?
  - Finish onboarding: Get access to data section
  
===== January 10th =====

**Completed**

  - Explored some potential rotation projects and met with team members (Jason, Ethan, and Mars). Potential projects:
    - (Scotty) Poking at explainability. The hypothesis is that people will make up a story even if there isn't one, and explainability in the affirmative is not helpful; rather, it should be used in the negative. Run an experiment along the lines of: ask an expert to explain a model (using SHAP), rebuild the model after some permutation to the data (TBD on details), and ask the expert to explain the model again.
    - (Jason) Change the CLMBR pretraining objective to masked language modeling and an autoregressive fine-tuning/adaptation stage.
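
The explain/permute/re-explain loop in the (Scotty) idea can be sketched with a toy model. This is a hypothetical illustration, not project code: it skips the ''shap'' library and instead uses the closed-form SHAP values of a linear model (for f(x) = w·x + b with independent features, feature i's SHAP value is w[i]·(x[i] − mean[i])), and the "refit after permutation" weights are simply made up to show the effect.

```python
# Toy sketch of the permutation experiment: compare a model's per-feature
# attributions before and after the data is permuted and the model refit.
# For a linear model with independent features, SHAP attributions have the
# closed form phi_i = w[i] * (x[i] - mean[i]); all numbers here are invented.

def linear_shap(weights, means, x):
    """Per-feature SHAP attributions for a linear model."""
    return [w * (xi - m) for w, m, xi in zip(weights, means, x)]

feature_means = [0.5, 0.5]
x = [1.0, 1.0]

# "Before": a fitted model in which feature 0 drives the prediction.
phi_before = linear_shap([2.0, 0.0], feature_means, x)

# "After": permuting the data breaks the feature-0/label relationship, so a
# refit model's weight on feature 0 collapses (hardcoded for illustration).
phi_after = linear_shap([0.0, 0.0], feature_means, x)

print(phi_before)  # -> [1.0, 0.0]  feature 0 carries all the attribution
print(phi_after)   # -> [0.0, 0.0]  the attribution vanishes post-permutation
```

In the actual experiment the "after" weights would come from genuinely refitting on permuted data, and the expert would be asked to narrate each attribution vector, testing whether a story gets invented even when the signal is gone.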