Mr. Kenneth Arnold

Assistant Professor

Biography

I’m interested in research and teaching at the intersection of data science, people, and Christian faith. My main projects are around AI for Everyday Creativity, including:

  • Applying today’s large AI language models to help writers express their ideas in their own words
  • AI for instructors to see what students are learning
  • Helping everyone understand what AI can do, and accurately calibrating both our excitement and our concerns

Education

  • B.S. in Electrical and Computer Engineering, Cornell University, 2007
  • S.M. in Media Arts and Sciences, MIT Media Lab, 2010
  • Ph.D. in Computer Science, Harvard University, 2020

Professional Experience

  • Microsoft Research New England, Cambridge, MA
    Research Internship Fall 2015
  • Luminoso, Cambridge, MA
    Co-founder, Researcher, Developer 2011 and Summer 2013
  • MIT Media Lab, Cambridge, MA
    Research Assistant August 2007–August 2011
  • IBM, Austin, TX
    Extreme Blue Intern Summer 2006
  • NASA Goddard Space Flight Center, Greenbelt, MD
    Nonlinear Signal Analysis Research Programmer Summer 2003 and 2004

Awards

Patents
  • From my internship at Microsoft Research:
    Interactive context-based text completions. Kenneth C. Arnold, Kai-Wei Chang, Adam Tauman Kalai. (US20180101599A1, pending).

  • From my internship at IBM (all patents list the inventors as Jacob C. Albertson, Kenneth C. Arnold, Steven D. Goldman, Michael A. Paolini, and Anthony J. Sessa):
    • Controlling resource access based on user gesturing in a 3D captured image stream of the user. (US7971156, issued Jun 28, 2011).
    • Informing a user of gestures made by others out of the user’s line of sight. (US7725547, issued May 25, 2010).
    • Tracking a range of body movement based on 3D captured image streams of a user. (US7840031, issued Nov 23, 2010).
    • Warning a vehicle operator of unsafe operation behavior based on a 3D captured image stream. (US7792328, issued Sep 7, 2010).
    • Controlling a document based on user behavioral signals detected from a 3D captured image stream. (US7877706, issued Jan 25, 2011).
    • Controlling a system based on user behavioral signals detected from a 3D captured image stream. (US7801332, issued Sep 21, 2010).
    • Warning a user about adverse behaviors of others within an environment based on a 3D captured image stream. (US8269834, issued Sep 18, 2012).
    • Adjusting a consumer experience based on a 3D captured image stream of a consumer response. (US8295542, issued Oct 23, 2012).