Salesforce has unveiled its Artificial Intelligence Acceptable Use Policy, outlining rules to govern how its AI products should and should not be used.
Governments and companies have been wrestling with the legality of various scenarios in which AI is used. Unfortunately, while there has been much debate — and a few lawsuits — there has been little consensus, let alone meaningful regulation.
Salesforce is taking it upon itself to outline rules governing the use of its AI products, consulting with its Ethical Use Advisory Council subcommittee to develop common-sense rules, according to Paula Goldman, Chief Ethical and Humane Use Officer:
It’s not enough to deliver the technological capabilities of generative AI, we must prioritize responsible innovation to help guide how this transformative technology can and should be used. Salesforce’s AI AUP will be central to our business strategy moving forward, which is why we took time to consult with our Ethical Use Advisory Council subcommittee, partners, industry leaders, and developers prior to its release. In doing so, we aim to empower responsible innovation and protect the people who trust our products as they are developed.
Goldman emphasized the need to make sure that important ethical concerns are not overlooked in the rush to bring something to market:
As businesses race to bring this technology to market, it’s critical that they do so inclusively and intentionally.
Salesforce should be commended for taking the initiative to release its Artificial Intelligence Acceptable Use Policy, a step that more companies will hopefully take.