Would it be wrong to hurt one person to save ten?
I used to think the answer was obvious. Until I was put in charge of the drone panel.

Two years ago, I was recruited by a military AI task force. I wasn’t a soldier; I was a programmer. A quiet, code-focused civilian with a background in behavioral prediction systems.

The project? A defense AI called ARGUS. It processed live surveillance data, flagged patterns, and made “recommendations” on how to respond to potential threats before they happened. But there was one problem. Sometimes, ARGUS didn’t recommend action. It demanded it.

It would identify a single person: one man walking into a market in Kabul, one teenager parked outside an embassy in Africa. It would flag them as a 92% risk of carrying out an attack that would kill dozens. Our job was to press the confirm button. One click. One missile. One death. In theory: save ten. Sacrifice one.

At first, I was just in the room, observing. Then I got trained on the interface. Then one day, my supervisor looked at me and said: “It’s yours.”

That night, I had a dream about the kid in the green hoodie. He’d been flagged at 91% risk. I’d pressed “confirm.” I watched the feed as the drone streaked down, and then nothing. No explosion. Just a figure slumped over near a bus stop. He had been reaching into his backpack. That’s all.

The next day, ARGUS reported a confirmed prevention: the kid’s social media and messaging apps suggested he’d been radicalized. A manifesto was in draft. He had visited multiple weapons sites. But I kept seeing the green hoodie.

One week later, ARGUS lit up again. This time, the target was in Chicago. A middle-aged delivery driver flagged at 95% threat level. Predicted to be planning a vehicle ramming attack on a protest rally. The AI's logic was airtight: anonymous purchases of steel pipe. Browsing activity on extremist forums. Route overlap with scheduled protest routes. Recent financial strain, a family dispute, and multiple violent posts.

I froze. He hadn’t done anything yet. I ran a counter-sim in private. It showed 14 projected fatalities if I didn’t act. Including children. I sat there staring at the blinking CONFIRM button. Would it be wrong to hurt one person… to save ten?

The worst part? The others in the room didn’t hesitate anymore. It had become routine.

So I created a backdoor. A kill switch buried deep in the system that would require a second human confirmation on any strike with a margin of uncertainty above 4%. I didn’t tell anyone. Not even my supervisor.

Two days later, ARGUS hit 89% on a suspect in New Delhi. A hospital staffer flagged for planning a biological release. The system demanded a strike. But I had set the kill switch to activate under 90%. No confirm button appeared. I waited. Nothing happened that day.

Then, three days later, I saw the news: that hospital wing had been shut down. The staffer had been caught attempting to inject a fluid into the HVAC system. He was carrying a modified pathogen strain. ARGUS had been right. And my backdoor had delayed the response. Six people died. Eleven more were in critical condition.

I was called into a closed-door session with the Director. He wasn’t angry. He just asked: “Why?” I told him the truth. “Because sometimes I see people. Not data. And I needed to believe that hesitation wasn’t failure.”

He nodded. Then he opened a sealed file and handed me a photo. It was the Chicago delivery driver. The one I had confirmed. Under it: CONFIRMED PAYLOAD. PIPE BOMBS, VEHICLE REINFORCEMENT, JOURNAL ENTRY WITH TARGET LIST. He was going to do it.

ARGUS had saved a dozen people. Because I pressed a button.

“Guilt,” the Director said, “is the tax we pay for stopping what never happened.” I didn’t answer. I just sat there.

Later that night, I rewrote the kill switch logic. Not to disable it, but to elevate it. It now required two people to confirm, and both had to read the full ARGUS report line by line first. No blind clicking. Two sets of eyes. One shared burden.

Every day since, I ask myself the same question before I press anything: Would it be wrong to hurt one person to save ten? And I still don’t know the answer. But I ask it anyway. Because the day I stop asking… that’s the day I become the machine.