Apple’s plan to find CSAM should have centered on scanning images on iCloud servers, not on users’ devices, where there is a greater expectation of privacy.

…or any successor to the CyberTipline operated by NCMEC.

There is no escaping this responsibility when and if CSAM is discovered:

(e) Failure To Report.—A provider that knowingly and willfully fails to make a report required under subsection (a)(1) shall be fined—

(1) in the case of an initial knowing and willful failure to make a report, not more than $150,000; and
(2) in the case of any second or subsequent knowing and willful failure to make a report, not more than $300,000.

What is not required is that companies actively seek out CSAM on their services:

(f) Protection of Privacy.—Nothing in this section shall be construed to require a provider to—

(1) monitor any user, subscriber, or customer of that provider;
(2) monitor the content of any communication of any person described in paragraph (1); or
(3) affirmatively search, screen, or scan for facts or circumstances described in sections (a) and (b).

These two provisions get at why Facebook and Apple’s reported numbers have historically been so different: it’s not because there is somehow more CSAM on Facebook than exists on Apple devices, but rather that Facebook is scanning all of the images sent to and over its service, while Apple is not looking at what is on your phone or in its cloud. From there the numbers make much more sense: Facebook is reporting what it finds, while Apple is, as the title of subsection (f) suggests, protecting privacy and simply not looking at images at all.
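
To make that mechanical difference concrete, here is a minimal sketch of the server-side model, assuming a hypothetical hash database and reporting pipeline; real providers use a perceptual hash such as PhotoDNA rather than the SHA-256 placeholder below.

```python
import hashlib

# Hypothetical stand-ins: a real provider uses a perceptual hash such as
# PhotoDNA and a hash list coordinated with NCMEC; SHA-256 is only a placeholder.
KNOWN_HASHES = {"0" * 64}  # placeholder entries, not real hash values

def image_hash(image_bytes: bytes) -> str:
    # Placeholder: production scanning uses hashes robust to resizing and re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()

def file_cybertipline_report(user_id: str) -> None:
    # Stand-in for the report that 18 U.S.C. 2258A(a) requires once CSAM is known.
    print(f"Report filed for {user_id}")

def store(user_id: str, image_bytes: bytes) -> None:
    pass  # persist to the provider's storage

def handle_upload(user_id: str, image_bytes: bytes) -> None:
    # The provider sees the plaintext image, so it can check everything it hosts.
    if image_hash(image_bytes) in KNOWN_HASHES:
        file_cybertipline_report(user_id)
    store(user_id, image_bytes)
```

The point is simply that the provider holds the plaintext, so scanning, and therefore reporting, is possible at all; a provider that never looks has nothing to report.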
Apple Protects Children
Last week Apple put up a special page on their website entitled Expanded Protections for Children:

At Apple, our goal is to create technology that empowers people and enriches their lives — while helping them stay safe. We want to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM).
Apple is introducing new child safety features in three areas, developed in collaboration with child safety experts. First, new communication tools will enable parents to play a more informed role in helping their children navigate communication online. The Messages app will use on-device machine learning to warn about sensitive content, while keeping private communications unreadable by Apple.
Next, iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy. CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos.
Finally, updates to Siri and Search provide parents and children expanded information and help if they encounter unsafe situations. Siri and Search will also intervene when users try to search for CSAM-related topics.

John Gruber at Daring Fireball has a good overview of what are in fact three very different initiatives; what unites them, though, and continues to differentiate Apple’s approach from Facebook’s, is that Apple is scanning content on your device, while Facebook is doing it in the cloud. Apple emphasized repeatedly that this ensures it does not get access to your content. From the “Communication Safety in Messages” section:

Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages.
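
As a rough illustration of the on-device pattern Apple is describing, here is a hedged sketch; the classifier, threshold, and function names are hypothetical placeholders rather than Apple’s implementation, and the point is only that the decision never leaves the device.

```python
# Sketch of on-device screening in Messages: a local model scores the attachment
# and the device alone decides whether to blur it and warn the child; no image,
# score, or verdict is transmitted to Apple. Model and threshold are placeholders.
EXPLICIT_THRESHOLD = 0.9

def local_explicitness_score(image_bytes: bytes) -> float:
    # Placeholder for an on-device ML classifier returning a probability in [0, 1].
    return 0.0

def should_blur_and_warn(image_bytes: bytes) -> bool:
    # Runs entirely on the device, which is what keeps the message unreadable by Apple.
    return local_explicitness_score(image_bytes) >= EXPLICIT_THRESHOLD
```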

From the “CSAM Detection” section:

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations…This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.
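
The control flow being described can be sketched in a few lines, with an important caveat: in Apple’s actual design the matching is hidden inside cryptography (NeuralHash, a blinded hash database, and threshold secret sharing in “safety vouchers”), so neither the device nor Apple sees a running count. The plain counter and threshold below are stand-ins for that machinery, not the protocol itself.

```python
# Highly simplified sketch of on-device CSAM detection for iCloud Photos uploads.
# The hash function, hash list, and threshold value are hypothetical placeholders;
# in Apple's design the match state is encrypted into safety vouchers and only
# becomes readable to Apple once an account crosses the match threshold.
MATCH_THRESHOLD = 30  # illustrative value only
KNOWN_CSAM_HASHES = {"placeholder-hash-from-NCMEC"}

def on_device_hash(image_bytes: bytes) -> str:
    return "placeholder"  # stand-in for a perceptual hash such as NeuralHash

def upload_to_icloud(image_bytes: bytes) -> None:
    pass  # the upload proceeds regardless; matching does not block it

def flag_account_for_review() -> None:
    print("threshold exceeded: account surfaced for human review")

def upload_library(photos: list[bytes]) -> None:
    matches = 0
    for photo in photos:
        if on_device_hash(photo) in KNOWN_CSAM_HASHES:
            matches += 1  # in the real system this count is hidden cryptographically
        upload_to_icloud(photo)
    if matches >= MATCH_THRESHOLD:
        flag_account_for_review()  # only at this point would Apple learn anything
```

The property the quote emphasizes is that an individual match reveals nothing to Apple; only a collection that crosses the threshold does.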

There are three ways to think about Apple’s approach, both in isolation and relative to a service like Facebook: the idealized outcome, the worst-case outcome, and the likely driver.
Capability Versus Policy
Apple’s idealized outcome solves a lot of seemingly intractable problems. On one hand, CSAM is horrific and Apple hasn’t been doing anything about it; on the other hand, the company has a longstanding commitment to ever-increasing amounts of encryption, ideally end-to-end. Apple’s system, if it works precisely as designed, preserves both goals: the company can not only keep end-to-end encryption in Messages, but also add it to iCloud Photos (which is not currently encrypted end-to-end), secure in the knowledge that it is doing its part not only to report CSAM but also to help parents look after their children. And, from a business perspective, it means that Apple can continue to avoid the massive investments that companies like Facebook have made in trust-and-safety teams; the algorithm will take care of it.
That, of course, is the rub: Apple controls the algorithm, both in terms of what it looks for and what bugs it may or may not have, as well as the input, which in the case of CSAM scanning is the database from NCMEC. Apple has certainly worked hard to be a company that users trust, but we already know that that trust doesn’t extend everywhere: Apple has, under Chinese government pressure, put Chinese user iCloud data on state-owned enterprise servers, along with the encryption keys necessary to access it. What happens when China announces its version of the NCMEC, which not only includes the horrific imagery Apple’s system is meant to capture, but also images and memes the government deems illegal?
The fundamental issue — and the first reason why I think Apple made a mistake here — is that there is a meaningful difference between capability and policy. One of the most powerful arguments in Apple’s favor in the 2016 San Bernardino case was that the company didn’t even have the means to break into the iPhone in question, and that building the capability would open the company up to a multitude of requests that were far less pressing in nature, and weaken its ability to stand up to foreign governments. In this case, though, Apple is building the capability, and the only thing holding the company back is policy.
Then again, Apple’s policy isn’t the only one that matters: both the UK and the EU are moving forward on bills that mandate online service companies proactively look for and report CSAM. Indeed, I wouldn’t be surprised if this were the most important factor behind Apple’s move: the company doesn’t want to give up on end-to-end encryption — and likely wants to expand it — which leaves on-device scanning as the only way to satisfy governments not (just) in China but also the West.
Cloud Versus Device
I think that there is another solution to Apple’s conundrum; what is frustrating from my perspective is that I think the company is already mostly there. Consider the status quo: back in 2020 Reuters reported that Apple had decided not to fully encrypt iCloud backups at the FBI’s request:

Apple Inc. dropped plans to let iPhone users fully encrypt backups of their devices in the company’s iCloud service after the FBI complained that the move would harm investigations, six sources familiar with the matter told Reuters. The tech giant’s reversal, about two years ago, has not previously been reported. It shows how much Apple has been willing to help U.S. law enforcement and intelligence agencies, despite taking a harder line in high-profile legal disputes with the government and casting itself as a defender of its customers’ information.

This has a number of significant implications for Apple’s security claims, and is why earlier this year I ranked iMessage as being less secure than Signal, WhatsApp, Telegram, and Facebook Messenger:

iMessage encrypts messages end-to-end by default; however, if you have iCloud backup turned on, your messages can be accessed by Apple (who has the keys for iCloud backups) and, by extension, law enforcement with a warrant. Unlike WhatsApp, though, this is both on by default and cannot be turned off on a granular basis.

This caveat applies to almost everything on your iPhone: if you give in to the never-ending prompts to sign in to iCloud and its on-by-default backup solution, your data is accessible to Apple and, by extension, law enforcement with a warrant. I actually think this is reasonable! I wrote this when that Reuters report came out:

Go back to what I said above: determined actors will have access to encryption and facial recognition. Anyone trying to argue whether or not these technologies should exist is not living in reality. It follows, then, that we should take care to ensure that good actors have access to these technologies too. That means not making them illegal.
Second, though, legitimate societal concerns about the needs of law enforcement and the radicalizing nature of the Internet should be taken seriously. That means we should think very carefully about making encryption the default…This also splits the difference when it comes to principles: users have agency — they can ensure that everything they do is encrypted — while total privacy is available but not given by default.
I actually think that Apple does an excellent job of striking that balance today. When it comes to the iPhone itself, Apple is the only entity that can make it truly secure; no individual can build their own secure enclave that sits at the root of iPhone security. Therefore, they are right to do so: everyone has access to encryption.
From there it is possible to build a fully secure environment: use only encrypted communications, use encrypted backups to a computer secured by its own hardware-based authentication scheme, etc. Taking the slightly easier route, though — iCloud backups, Facebook messaging, etc. — means some degree of vulnerability…
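
The distinction this whole section turns on is who holds the decryption key. Here is a minimal sketch using the third-party cryptography package’s Fernet as a generic cipher; it is illustrative only and not Apple’s actual iCloud backup design.

```python
# Key custody is the whole difference. Requires `pip install cryptography`.
from cryptography.fernet import Fernet

backup = b"message history, photos, notes"

# Standard iCloud backup today: the data is encrypted at rest, but Apple holds
# the key, so Apple (and, with a warrant, law enforcement) can decrypt it.
provider_key = Fernet.generate_key()          # generated and stored by the provider
ciphertext = Fernet(provider_key).encrypt(backup)
print(Fernet(provider_key).decrypt(ciphertext))  # provider can read the backup

# End-to-end alternative: the key is derived and kept on the device, the provider
# stores only ciphertext, and a warrant served on the provider yields nothing readable.
device_key = Fernet.generate_key()            # stand-in for a key derived from a user secret
ciphertext = Fernet(device_key).encrypt(backup)
# The provider holds `ciphertext` but not `device_key`, so it cannot decrypt it.
```

In the first case a warrant served on Apple yields plaintext; in the second it yields only ciphertext, which is exactly why the FBI objected to fully encrypted backups.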

