On New Year’s Day, Jonathan Rinderknecht purportedly asked ChatGPT: “Are you at fault if a fire is lift because of your cigarettes,” misspelling the word “lit.” “Yes,” ChatGPT replied. Ten months later, he stands accused of starting a small blaze that authorities say reignited a week later to become the devastating Palisades fire.
Mr. Rinderknecht, who has pleaded not guilty, had told the chatbot months earlier how “amazing” it had felt to burn a Bible, according to a federal complaint, and had also asked it to create a “dystopian” painting of a crowd of poor people fleeing a forest fire while a crowd of rich people mocked them from behind a gate.
For federal authorities, these interactions with artificial intelligence indicated Mr. Rinderknecht’s pyrotechnic state of mind and his motive and intent to start the fire. Combined with GPS data that they say places him at the scene of the initial blaze, the exchanges were enough to arrest him and charge him with several counts, including destruction of property by means of fire.
This disturbing development is a warning for our legal system. As people increasingly turn to A.I. chat tools as confidants, therapists and advisers, we urgently need a new form of legal protection that would safeguard most private communications between people and A.I. chatbots. I call it A.I. interaction privilege.
All legal privileges rest on the idea that certain relationships — lawyer and client, doctor and patient, priest and penitent — serve a social good that depends on candor. Without assurance of privacy, people self-censor and society loses the benefits of honesty. Courts have historically been reluctant to create new privileges, except where “confidentiality has to be absolutely essential to the functioning of the relationship,” Greg Mitchell, a University of Virginia law professor, told me. Many users’ engagements with A.I. now reach this threshold.
People speak increasingly freely to A.I. systems, not as diaries but as partners in conversation. That’s because these systems hold conversations that are often indistinguishable from human dialogue. The machine seemingly listens, reasons and provides responses — in some cases not just reflecting but shaping how users think and feel. A.I. systems can draw users out, just as a good lawyer or therapist does. Many people turn to A.I. precisely because they lack a safe and affordable human outlet for taboo or vulnerable thoughts.
This is arguably by design. Just last month OpenAI’s chief executive, Sam Altman, announced that the next iteration of the company’s ChatGPT platform would “relax” some restrictions on users and allow them to make their ChatGPT “respond in a very humanlike way.”
Allowing the government to access such unfiltered exchanges and treat them as legal confessions would have a massive chilling effect. If every private thought experiment can later be weaponized in court, users of A.I. will censor themselves, undermining some of the most valuable uses of these systems. It will destroy the candid relationship that makes A.I. useful for mental health and legal and financial problem-solving, turning a potentially powerful tool for self-discovery and self-representation into a potential legal liability.
At present, most digital interactions fall under the Third-Party Doctrine, which holds that information voluntarily disclosed to other parties — or stored on a company’s servers — carries “no legitimate expectation of privacy.” This doctrine allows government access to much online behavior (such as Google search histories) without a warrant.
But are A.I. conversations “voluntary disclosures” in this sense? Since many users approach these systems not as search engines but as private counselors, the legal standard should evolve to reflect that expectation of discretion. A.I. companies already hold more intimate data than any therapist or lawyer ever could. Yet they have no clear legal duty to protect it.
A.I. interaction privilege should mirror existing legal privileges in three respects. First, communications with the A.I. for the purpose of seeking counsel or emotional processing should be protected from forced disclosure in court. Users could designate protected sessions through app settings or claim privilege during legal discovery if the context of the conversation supports it. Second, this privilege must incorporate the so-called duty to warn principle, which obliges therapists to report imminent threats of harm. If an A.I. service reasonably believes a user poses an immediate danger to self or others or has already caused harm, disclosure should be not just permitted but required. And third, there must be an exception for crime and fraud. If A.I. is used to plan or execute a crime, the relevant conversations should be discoverable under judicial oversight.
Under this logic, Mr. Rinderknecht’s case reveals both the need and the limits of such protection. His cigarette query, functionally equivalent to an internet search, would not merit privilege. But under A.I. interaction privilege, his confession about Bible burning should be protected. It was neither a plan for a crime nor an imminent threat.
Creating a new privilege follows the law’s pattern of adapting to new forms of trust. The psychotherapist-patient privilege itself was only formally recognized in 1996, when the Supreme Court acknowledged the therapeutic value of confidentiality. The same logic applies to A.I. now: The social benefit of candid interaction outweighs the cost of occasional lost evidence.
To leave these conversations legally unprotected is to invite a regime where citizens must fear that their digital introspection could someday be used against them. Private thought — whether spoken to a lawyer, a therapist or a machine — must remain free from the fear of state intrusion.