Meta plans to automate risk assessments for many of its products

Online reports and internal documents show that an AI-driven system may soon be responsible for assessing the potential harms and privacy risks of up to 90% of updates to Meta apps such as Instagram and WhatsApp. According to NPR, a 2012 agreement between Facebook (now Meta) and the U.S. Federal Trade Commission requires the company to conduct privacy reviews of its products and assess the risks of any potential updates. To date, these reviews have largely been carried out by human assessors. Under the new system, Meta said, product teams will fill out a questionnaire about their work and will typically receive an "instant decision" in which the AI identifies risks and sets out requirements that must be met before the feature is released.
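The reports do not describe how such a system works internally. As a purely hypothetical illustration of the questionnaire-to-"instant decision" flow described above, the sketch below assumes a simple rule-based triage in Python; every field name, rule, and the human-escalation step is invented for illustration and does not reflect Meta's actual tooling.

from dataclasses import dataclass, field

# Hypothetical sketch only: questionnaire fields, risk rules, and the
# escalation logic are illustrative assumptions, not Meta's real system.

@dataclass
class Questionnaire:
    feature_name: str
    collects_new_data: bool
    shares_data_with_third_parties: bool
    affects_minors: bool
    changes_default_privacy_settings: bool

@dataclass
class Decision:
    risks: list = field(default_factory=list)
    requirements: list = field(default_factory=list)
    needs_human_review: bool = False

def instant_decision(q: Questionnaire) -> Decision:
    """Map questionnaire answers to flagged risks and pre-launch requirements."""
    d = Decision()
    if q.collects_new_data:
        d.risks.append("new data collection")
        d.requirements.append("document retention policy and user notice")
    if q.shares_data_with_third_parties:
        d.risks.append("third-party data sharing")
        d.requirements.append("update data-sharing disclosures")
    if q.affects_minors or q.changes_default_privacy_settings:
        # Higher-sensitivity cases are routed to a human assessor.
        d.needs_human_review = True
    return d

if __name__ == "__main__":
    q = Questionnaire(
        feature_name="example sharing feature",
        collects_new_data=True,
        shares_data_with_third_parties=False,
        affects_minors=False,
        changes_default_privacy_settings=False,
    )
    print(instant_decision(q))

In this toy version, most submissions get an automatic decision listing risks and release requirements, while sensitive cases fall back to human review, mirroring the reported split between automated and human assessment.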

