Posts tagged Big Brother
Rachel Maddow: A voice FOR Big Brother
Why do people even pay attention to her anymore?
Just imagine her “reporting” on what Winston Smith went through in 1984:
RACHEL MADDOW: “[…] Smith was then escorted to Room 101, where he underwent certain interrogation methods. These methods involved rats: harmless, furry, adorable rats that Smith had an irrational fear of.
Irrational fear is the problem here, and the Ministry of Truth needed to let Mr. Smith know that his fears, skepticism and outright conspiracy theories regarding Big Brother were irrational.
We can’t allow people like Mr. Smith to go around making unwarranted claims against Big Brother. I mean, has Big Brother ever harmed you? There is no proof of this or any of Mr. Smith’s allegations. His anxieties, his troubles with Big Brother, were and continue to be unwarranted.
But let’s get to the heart of the matter: what Mr. Smith was really doing was sowing the seeds of doubt about the strength of Big Brother, and those doubts could have strengthened our enemies, giving them the confidence to fight harder. As you can see, this is problematic. The Ministry of Truth had to step in; they don’t want any harm brought to any of you, and if Mr. Smith had persisted with his doubts, that very well could have happened.
What if Winston Smith had spread his doubts to our enemies? Perhaps they could have recruited him to share information about Big Brother that they could have exploited! Big Brother has never done anything wrong, and yet we have this man developing a portrayal of Big Brother that is contrary to all we know about him. Winston Smith was rewriting Big Brother’s history based solely on his own delusions and conspiracy theories.
Folks, this is why we have the Ministry of Truth - to protect ourselves from characters like this Winston Smith.
It’s time for a commercial break, when we come back we will be discussing a new miracle drug that will cure ALL of our ailments. Big Brother calls it “SOMA”. What is it and when will it be available? All that after the break.”
Side Note: It won’t be long before the DHS and the Pentagon use this “oops! I didn’t mean to” excuse when they start killing Americans with the same drones they are already accidentally spying on them with.
As long as the Air Force pinky-swears it didn’t mean to, its drone fleet can keep tabs on the movements of Americans, far from the battlefields of Afghanistan, Pakistan or Yemen. And it can hold data on them for 90 days — studying it to see if the people it accidentally spied upon are actually legitimate targets of domestic surveillance.
The Air Force, like the rest of the military and the CIA, isn’t supposed to conduct “nonconsensual surveillance” on Americans domestically, according to an Apr. 23 instruction from the flying service. But should the drones taking off over American soil accidentally keep their cameras rolling and their sensors engaged, well … that’s a different story.
“Collected imagery may incidentally include US persons or private property without consent,” reads the instruction (.pdf), unearthed by the secrecy scholar Steven Aftergood of the Federation of American Scientists. That kind of “incidental” spying won’t be immediately purged, however. The Air Force has “a period not to exceed 90 days” to get rid of it — while it determines “whether that information may be collected under the provisions” of a Pentagon directive that authorizes limited domestic spying.
In other words, if an Air Force drone accidentally spies on an American citizen, the Air Force will have three months to figure out if it was legally allowed to put that person under surveillance in the first place.
Not all domestic drone surveillance is that ominous. “Air Force components may, at times, require newly collected or archived domestic imagery to perform certain missions,” the Air Force concluded. Acceptable surveillance includes flying drones over natural disasters; studying environmental changes; or keeping tabs above a domestic military base. Even those missions, however, raise “policy and legal concerns that require careful consideration, analysis and coordination with legal counsel.”
The potential trouble with those local intelligence missions is once the drones’ powerful sensors and cameras sweep up imagery and other data from Americans nearby, the Air Force won’t simply erase the tapes. It’ll start analyzing whether the people it’s recorded are, among other things, “persons or organizations reasonably believed to be engaged or about to engage, in international terrorist or international narcotics activities.” Suddenly, accidental spying provides an entrance point into deliberate investigations, all done without a warrant.
And it doesn’t stop with the Air Force. “U.S. person information in the possession of an Air Force intelligence component may be disseminated pursuant to law, a court order,” or the Pentagon directive that governs acceptable domestic surveillance. So what begins as a drone flight over, say, a national park to spot forest fires could end up with a dossier on campers getting passed on to law enforcement.
All this is sure to spark a greater debate about the use of drones and other military surveillance migrating from the warzones of Iraq and Afghanistan back home. The Department of Homeland Security — which is lukewarm on its fleet of spy drones — is expanding its use of powerful, military-grade camera systems. And police departments across the country are beginning to buy and fly drones from the military. Now the Air Force’s powerful spy tools could creep into your backyard in a different way.
There’s an irony here. The directive is actually designed to make sure that Air Force personnel involved in surveillance don’t start spying on their fellow citizens. It instructs that “Questionable Intelligence Activities … that may violate the law, any executive order or Presidential directive” have to be reported immediately up the chain of command. But what’s most questionable might be the kind of local spying the Air Force considers legit.
Tom Cruise made “pre-crime” a futuristic and controversial method of law enforcement in the 2002 movie Minority Report.
Ten years later, the idea of preemptively identifying a criminal — particularly an inside threat — is taking shape within the U.S. Defense Department, reports Joe Gould at Army Times.
Whether it’s a low-ranking soldier intent on dumping secret information to WikiLeaks or a rogue sergeant going on a shooting rampage, insider threats can seriously plague the military and the government as a whole.
Taking a novel approach, the Pentagon is spearheading research into studying the predictive behavior of personnel in the lead-up to a betrayal.
From Army Times:
The Army’s efforts dovetail with a broader federal government initiative. President Obama signed an executive order last October that established an Insider Threat Task Force to develop a government wide program to deter, detect and mitigate insider threats.
Among other responsibilities, it would create policies for safeguarding classified information and networks, and for auditing and monitoring users.
In January, the White House’s Office of Management and Budget issued a memo directing government agencies that deal with classified information to ensure they adhere to security rules enacted after the WikiLeaks debacle.
Beyond technical solutions, the document asks agencies to create their own “insider threat program” to monitor employees for “behavioral changes” suggesting they might leak sensitive information.
Gould points to a DARPA research solicitation for Suspected Malicious Insider Threat Elimination (SMITE) which would track employees’ actions on their networked computers — in particular, seemingly insignificant “observational data of no immediate relevance” — to determine if the user’s overall behavior is leading to something malicious.
“Forensic-like techniques can be used to find clues, gather and evaluate evidence and combine them deductively. Many attacks are combinations of directly observable and inferred events,” states the solicitation, emphasizing the word “inferred.”
Behavioral studies try to “look beyond computers to spot the point when a good soldier turns” — whether the attack at hand is an information leak, or even a homicide.
A solicitation for another program — Anomaly Detection at Multiple Scales, or ADAMS — uses accused Fort Hood shooter Maj. Nidal Hasan to frame the problem. It asks how to sift for anomalies through millions of data points — the emails and text messages on Fort Hood, for instance — using a unique algorithm, to rank threats and learn based on user feedback.
The Software Engineering Institute at Carnegie Mellon sheds light on what kind of character profile a once-trusted employee-turned-threat would display. There are two noteworthy profiles of someone who would steal and leak intellectual property from his or her workplace:
All of the government’s ongoing research and exploration into “computer forensics” will culminate in new standards of defense against internal attacks later this year. The Insider Threat Task Force is expected to be unveiled in October.
In the wake of the biggest dump of classified information in the history of the Army, the brass is searching for ways to watch what every soldier is doing on his or her Army computer.
The Army wants to look at keystrokes, downloads and Web searches on computers that soldiers use.
Maj. Gen. Steven Smith, chief of the Army Cyber Directorate, said the software was one of his chief priorities, joking that it would take the place of a lower-tech solution: “A guy with a large bat behind every user as they go to search the Internet.”
“Now we’ve been in the news — I don’t know if you’ve seen it — with a little insider threat issue,” Smith continued.
Smith did not mention Pfc. Bradley Manning by name. However, the effort comes in the wake of the former intelligence analyst’s alleged leak of hundreds of thousands of pages of classified documents to the anti-secrecy organization WikiLeaks in 2009 and 2010. Manning faces a military trial on 22 counts, including aiding the enemy.
According to Smith, the Army will soon shop for software pre-programmed to detect a user’s abnormal behavior and record it, catching malicious insiders in the act. Though it is unclear how broadly the Army plans to adopt the program, the Army has more than 900,000 users on its computers.
Smith explained how it might work.
“So I’m on the South American desk, doing intelligence work and all of a sudden I start going around to China, let’s say,” Smith said. “That might be an anomaly, it might be justified, but I would sure like to know that and let someone make a decision, almost at the speed of thought.”
The scenario echoes the allegations against Manning: As an intelligence analyst charged with researching the Shiite threat to Iraqi elections, Manning raided classified networks for State Department cables, Afghanistan and Iraq war logs and video from a helicopter attack, according to courtroom testimony.
Software of the type Smith describes is at various stages of development in the public and private sectors. Such software could spy on virtually any activity on a desktop depending on its programming, to detect when a soldier searches outside of his or her job description, downloads massive amounts of data from a shared hard drive or moves the data onto a removable drive.
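The kind of detection Smith describes can be sketched in a few lines. This is a hypothetical illustration only, not the Army’s actual software (whose design is not public); the data, the threshold, and the choice of a single download-volume signal are all invented for the example:

```python
import statistics

# Hypothetical sketch: flag a user whose daily download volume jumps
# far outside their own historical baseline. A real system would track
# many signals (searches outside a job description, removable-media
# use, off-hours access) and combine them, but the core idea is the
# same: build a per-user baseline, then score deviations from it.

def is_anomalous(history_mb, today_mb, z_threshold=3.0):
    """Return True if today's volume is more than z_threshold
    standard deviations above the user's historical mean."""
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb)
    if stdev == 0:                      # flat baseline: any change is odd
        return today_mb != mean
    return (today_mb - mean) / stdev > z_threshold

baseline = [20, 25, 18, 22, 30, 24, 21]   # MB downloaded per day
print(is_anomalous(baseline, 26))          # an ordinary day
print(is_anomalous(baseline, 4000))        # a mass download gets flagged
```

An alert like the second one is what would trigger the “let someone make a decision” step Smith mentions, rather than any automatic action.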
The program could respond by recording the activity, alerting an administrator, shutting down the user’s access, or by feeding the person “dummy data” to watch what they do next, said Charles Beard, a cybersecurity executive with the defense firm SAIC’s intelligence, surveillance and reconnaissance group.
“It’s a giant game of cat and mouse with some of these actors,” Beard said.
What’s exciting, Smith said, is the possibility of detecting problems as they happen, on what cybersecurity experts call “zero day,” as opposed to after the fact.
“We don’t want to be forensics experts. We want to catch it at the perimeter,” Smith said. “We want to catch this before it has a chance to be exploited.”
A private company doing the government’s work does not face the same privacy restrictions.
The U.S. House of Representatives is expected to pass a reprehensible cyber-security bill that seeks to protect online companies—from giant social media firms to data-sharing networks controlling utilities—from cyber attack. It is reprehensible because, as Democratic San Jose Rep. Zoe Lofgren said this week, it gives the federal government too much access to the private lives of every Internet user. Or, as libertarian Rep. Ron Paul bluntly put it, it turns Facebook and Google into “government spies.”
But that’s not the biggest problem with the Congress’s urge to address a real problem—protecting the Internet from cyber attacks. While House passage launches a process that continues in the Senate, the bigger problem with the best known of the cyber bills before the House, CISPA, the Cyber Intelligence Sharing and Protection Act, is not what is in it — which is troubling enough — but what is not on Congress’s desk: a comprehensive approach to stop basic constitutional rights from eroding in the Internet Age.
“I don’t think the current cyber-security debate is adequately protecting civil liberties,” said Anjali Dalal, a resident fellow with the Information Society Project at Yale Law School (and a blogger). “CISPA seems to place constitutionally suspect behavior outside of judicial review. The bill immunizes all participating entities ‘acting in good faith.’ So what happens when an ISP hands over mountains of data under the encouragement and appreciation of the federal government? We can’t sue the government, because they didn’t do anything. And we can’t sue the ISP because the bill forbids it.”
What happens is anybody’s guess. But what does not happen is clear. The government, as with the recently adopted National Defense Authorization Act of 2012, does not have to go through the courts when fighting state “enemies” on U.S. soil. Instead, CISPA, like NDAA, expands extra-judicial procedures as if America’s biggest threats must always be addressed on a kind of wartime footing. Constitutional protections, starting with privacy rights, are mostly an afterthought.
The CISPA bill takes an information-sharing approach to fight cyber attacks. Nobody has said there’s a problem with the government giving classified information to private firms to stop attacks. It is the opposite of that—Internet companies sharing information about users and their online activities—that raises civil liberties red flags. In general, the courts distinguish between public and private aspects of online activity, holding, for example, that e-mail addresses, subject lines and traffic patterns are like snail-mail addresses on the outside of a paper envelope—they are public. But just as a letter’s contents are private, courts have said that is true with online activity—although in a recent Supreme Court case involving wireless surveillance, Justice Sonia Sotomayor raised the question of how much privacy people should expect in their online activities.
Location services company Navizon has a new system, called Navizon I.T.S., that could allow tracking of visitors in malls, museums, offices, factories, secured areas and just about any other indoor space. It could be used to examine patterns of foot traffic in retail spaces, assure that a museum is empty of visitors at closing time, or even to pinpoint the location of any individual registered with the system. But let’s set all that aside for a minute while we freak out about the privacy implications.
Most of us leave Wi-Fi on by default, in part because our phones chastise us when we don’t. (Triangulation by Wi-Fi hotspots is important for making location services more accurate.) But you probably didn’t realize that, using proprietary new “nodes” from Navizon, any device with an active Wi-Fi radio can be seen by a system like Navizon’s.
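Navizon has not published how I.T.S. computes positions, but the general technique (estimate a distance to each fixed node from signal strength, then trilaterate) can be sketched. Everything here is an invented illustration: the node coordinates, the radio constants, and the path-loss exponent are made-up values, not Navizon’s:

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40, path_loss_exp=2.5):
    """Log-distance path-loss model: rough distance in meters from a
    received signal strength, given the (assumed) power at 1 meter."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given three known points and distances to each.
    Subtracting the circle equations pairwise yields two linear equations."""
    ax, ay = 2 * (p2[0] - p1[0]), 2 * (p2[1] - p1[1])
    bx, by = 2 * (p3[0] - p2[0]), 2 * (p3[1] - p2[1])
    c1 = r1**2 - r2**2 + p2[0]**2 - p1[0]**2 + p2[1]**2 - p1[1]**2
    c2 = r2**2 - r3**2 + p3[0]**2 - p2[0]**2 + p3[1]**2 - p2[1]**2
    det = ax * by - ay * bx
    return (c1 * by - c2 * ay) / det, (ax * c2 - bx * c1) / det

# Three fixed nodes at known positions in a 20 m x 20 m room,
# hearing a device that is actually standing at (5, 10):
nodes = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0)]
dists = [math.dist((5, 10), n) for n in nodes]
print(trilaterate(nodes[0], dists[0], nodes[1], dists[1], nodes[2], dists[2]))
```

In practice RSSI-derived distances are noisy, so real systems fuse many nodes and smooth over time; but the point stands that any beaconing Wi-Fi radio gives fixed receivers enough information to locate it.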
To demonstrate the technology, here’s Navizon CEO and founder Cyril Houri hunting for one of his colleagues at a trade show using a kind of first person shooter-esque radar.
In his first television interview since he resigned from the National Security Agency over its domestic surveillance program, William Binney discusses the NSA’s massive power to spy on Americans and why the FBI raided his home after he became a whistleblower. Binney was a key source for investigative journalist James Bamford’s recent exposé in Wired Magazine about how the NSA is quietly building the largest spy center in the country in Bluffdale, Utah. The Utah spy center will contain near-bottomless databases to store all forms of communication collected by the agency, including private emails, cell phone calls, Google searches and other personal data.
Binney served in the NSA for over 30 years, including a time as technical director of the NSA’s World Geopolitical and Military Analysis Reporting Group. Since retiring from the NSA in 2001, he has warned that the NSA’s data-mining program has become so vast that it could “create an Orwellian state.” Today marks the first time Binney has spoken on national television about NSA surveillance. This interview is part of a four-part special.
SEC. 31406. VEHICLE EVENT DATA RECORDERS.
(a) Mandatory Event Data Recorders-
(1) IN GENERAL- Not later than 180 days after the date of enactment of this Act, the Secretary shall revise part 563 of title 49, Code of Federal Regulations, to require, beginning with model year 2015, that new passenger motor vehicles sold in the United States be equipped with an event data recorder that meets the requirements under that part…
(d) Revised Requirements for Event Data Recorders- Based on the findings of the study under subsection (c), the Secretary shall initiate a rulemaking proceeding to revise part 563 of title 49, Code of Federal Regulations. The rule—
(1) shall require event data recorders to capture and store data related to motor vehicle safety covering a reasonable time period before, during, and after a motor vehicle crash or airbag deployment, including a rollover;
(2) shall require that data stored on such event data recorders be accessible, regardless of vehicle manufacturer or model, with commercially available equipment in a specified data format;
(3) shall establish requirements for preventing unauthorized access to the data stored on an event data recorder in order to protect the security, integrity, and authenticity of the data; and
(4) may require an interoperable data access port to facilitate universal accessibility and analysis.
(e) Disclosure of Existence and Purpose of Event Data Recorder- The rule issued under subsection (d) shall require that any owner’s manual or similar documentation provided to the first purchaser of a passenger motor vehicle for purposes other than resale—
(1) disclose that the vehicle is equipped with such a data recorder…
(f) Access to Event Data Recorders in Agency Investigations- Section 30166(c)(3)(C) of title 49, United States Code, is amended by inserting ‘, including any electronic data contained within the vehicle’s diagnostic system or event data recorder’ after ‘equipment.’
EFF, OpenMedia.ca, CIPPIC and a number of civil society organizations have declared this to be ‘Stop Cyber Spying Week’ in protest of several controversial U.S. cybersecurity legislative proposals, including the bill currently before Congress called CISPA, the Cyber Intelligence Sharing & Protection Act of 2011. While ‘Stop Cyber Spying Week’ is focused on U.S. initiatives, Canadians should be concerned as well, as the adoption of a privacy-invasive U.S. cybersecurity strategy is likely to have serious implications for Canadian civil liberties. For this reason, Canadian civil society groups have joined the protest. In general, Canadians would do well to remain vigilant.
Under the guise of ‘cybersecurity’, CISPA aims to mobilize Internet intermediaries to institute a sweeping, privacy-invasive, voluntary information-sharing regime with few safeguards. The U.S. cybersecurity strategy, embodied in CISPA and other legislative proposals, also seeks to empower Internet companies to deploy ill-defined ‘countermeasures’ in order to combat these threats. Use of these powers is purportedly limited to situations addressing ‘cybersecurity’ threats, yet this term is so loosely defined that it can encompass almost anything – even, potentially, to investigate potential breaches of intellectual property rights!
The cornerstone of the privacy-invasive CISPA component is the establishment of private-public partnerships for information sharing. This creates a two-tiered regime that, on the one hand, facilitates the collection of personal Internet data by private Internet companies as well as the sharing of that information with the government and, on the other, allows government agencies to share information with private companies.
To enable information flows from Internet companies to government agencies, CISPA will grant Internet companies immunity from civil or criminal liability for any monitoring or sharing of user activity—as long as it is done in ‘good faith.’ Specifically, CISPA authorizes companies to “use cybersecurity systems to identify and obtain cyber threat information.” Aggrieved users who sue Internet companies for wrongfully handing over their data to the government will have to meet the incredibly high bar of proving the decision constituted ‘willful misconduct.’
The U.S. cybersecurity strategy will also permit Internet companies to employ dubiously defined ‘countermeasures,’ provided they are justified with equally vague and undefined ‘defensive intent.’ Internet companies will be permitted to deploy ‘cybersecurity systems’ – products designed to ‘safeguard…a network from efforts to degrade, disrupt, or destroy’. While it is unclear exactly what this would permit an Internet company to do, it could allow blocking of specific websites or individuals or even a much broader range of filtering. Given the potentially all-encompassing and inclusive definition of ‘cybersecurity’, it would not be surprising if these ‘countermeasures’ were ultimately used to block online entities such as Wikileaks or sites accused of copyright infringement. The inclusion of ‘degrade’ in the definition of permissible ‘cybersecurity systems’ could even raise net neutrality concerns, as ISPs have, in the past, claimed ‘network degradation’ as justification for the throttling of downstream services such as peer-to-peer applications. Indeed, U.S. cybersecurity laws have a history of being employed by private Internet companies to stifle downstream competition.
In sum, the U.S. cybersecurity strategy envisions a voluntary cooperative regime in which Internet companies are given broad-ranging immunities to surveil Internet users and downstream online services. This amounts to an erosion of the personal privacy safeguards currently in place. Under this regime, an online company need only assert a vague ‘cybersecurity objective’ and it will have carte blanche to bypass domestic laws and protections against privacy invasion.
Chances are, you’ll snag the wrong people, and when you do, how can you tell? How do you clear suspects of crimes that haven’t happened?
Pre-crime prevention is a terrible idea.
Here is a quiz for you. Is predicting crime before it happens: (a) something out of Philip K. Dick’s Minority Report; (b) the subject of a Department of Homeland Security research project that has recently entered testing; (c) a terrible and dangerous idea that will inevitably be counter-productive and that will levy a high price in terms of civil liberties while providing little to no marginal security; or (d) all of the above?
If you picked (d) you are a winner!
The U.S. Department of Homeland Security is working on a project called FAST, the Future Attribute Screening Technology, which is some crazy straight-out-of-sci-fi pre-crime detection and prevention software that may come to an airport security screening checkpoint near you someday soon. Yet again the threat of terrorism is being used to justify the introduction of super-creepy invasions of privacy, leading us one step closer to a turn-key totalitarian state. This may sound alarmist, but in cases like this a little alarm is warranted. FAST will remotely monitor physiological and behavioral cues, like elevated heart rate, eye movement, body temperature, facial patterns, and body language, and analyze these cues algorithmically for statistical aberrance in an attempt to identify people with nefarious intentions. There are several major flaws with a program like this, any one of which should be enough to condemn attempts of this kind to the dustbin. Let’s look at them in turn.
First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let’s assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are probably much, much rarer than that, or we would have a whole lot more acts of terrorism, given the daily throughput of the global transportation system. Now let’s imagine the FAST algorithm correctly classifies 99.99 percent of observations — an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse about 100 people of being terrorists for every one terrorist it finds. Given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty would be a non-trivial, and invasive, task.
Of course FAST has nowhere near a 99.99 percent accuracy rate. I imagine much of the work being done here is classified, but a writeup in Nature reported that the first round of field tests had a 70 percent accuracy rate. From the available material it is difficult to determine exactly what this number means. There are a couple of ways to interpret this, since both the write-up and the DHS documentation (all pdfs) are unclear. This might mean that the current iteration of FAST correctly classifies 70 percent of people it observes — which would produce false positives at an abysmal rate, given the rarity of terrorists in the population. The other way of interpreting this reported result is that FAST will call a terrorist a terrorist 70 percent of the time. This second option tells us nothing about the rate of false positives, but it would likely be quite high. In either case, it is likely that the false-positive paradox would be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
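The arithmetic behind the false-positive paradox is easy to check directly. This sketch uses the illustrative numbers from above (1 terrorist per 1,000,000 people, and a hypothetical screener that is right 99.99 percent of the time on both classes):

```python
# False-positive paradox: even a 99.99%-accurate screener drowns
# real hits in false alarms when the target class is extremely rare.

def screening_outcomes(population, prevalence, accuracy):
    """Expected true and false positives for a symmetric-accuracy screener.

    Assumes the screener flags a true terrorist with probability
    `accuracy` and wrongly flags an innocent with probability
    (1 - accuracy) -- the simple model used in the text above.
    """
    terrorists = population * prevalence
    innocents = population - terrorists
    true_positives = terrorists * accuracy
    false_positives = innocents * (1 - accuracy)
    return true_positives, false_positives

tp, fp = screening_outcomes(1_000_000, 1 / 1_000_000, 0.9999)
print(f"true positives:  {tp:.4f}")   # roughly 1 real terrorist caught
print(f"false positives: {fp:.4f}")   # roughly 100 innocents flagged
print(f"innocents flagged per real hit: {fp / tp:.0f}")
```

Re-running with a 70 percent accuracy makes the ratio catastrophically worse, which is the point: at realistic accuracies and prevalences, nearly everyone the system flags is innocent.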
The second major problem with FAST is the experimental methodology being used to develop it. According to a DHS privacy impact assessment of the research, the technology is being tested in a lab setting using volunteer subjects. These volunteer participants are sorted into two groups, one of which is “explicitly instructed to carry out a disruptive act, so that the researchers and the participant (but not the experimental screeners) already know that the participant has malintent.” The experimental screeners then use the results from the FAST sensors to try and identify participants with malintent. Presumably this is where that 70 percent number comes from.
The validity of this procedure is based on the assumption that volunteers who have been instructed by researchers to “have malintent” serve as a reasonable facsimile of real life terrorists in the field. This seems like quite a leap. Without actual intent to commit a terrorist act — something these volunteers necessarily don’t have — it is likely to be difficult to have test observations that mimic the actual subtle cues a terrorist might show. It would seem that the act of instructing a volunteer to have malintent would make that intent seem acceptable within the testing conditions, thereby altering the subtle cues that a subject might exhibit. Without a legitimate sample exhibiting the actual characteristics being screened for — a near impossible proposition for this project — we should be extremely wary of any claimed results.
The fact is that the world is not perfectly controllable, and infallible security is impossible. It will always be possible to imagine incremental gains in security by instituting increasingly invasive and opaque algorithmic screening procedures. What we should be thinking about, however, is the marginal gain in security in relation to the marginal cost. A program like FAST is doomed from the word go by a preponderance of false positives. We should ask, in a world where we already pass through full-body scanners, take off our shoes, belts and coats, and only carry 3.5 oz containers of liquid, is more stringent screening really what we need, and will it make us any safer? Or will it merely brand hundreds of innocent people as potential terrorists and provide the pseudo-scientific justification of algorithmic behavioral screening for greater invasions of their privacy? In this case the cost is likely to be high, and there is little evidence that the gain will be meaningful. In fact, the results may be counter-productive as TSA and DHS staff are forced to divert their attention to weeding through the pile of falsely flagged people, instead of spending their time on more time-tested, common-sense screening procedures.
Thinking statistically tells us that any project like FAST is unlikely to overcome the false-positive paradox. Thinking scientifically tells us that it is nearly impossible to get a real, meaningful sample for testing or validating such a screening program — and as a result we shouldn’t trust the sparse findings we have. And thinking about the marginal trade off we are making tells us the (possible) gain is not worth the cost. Pick your reason, FAST is a bad idea.
Increasingly, the U.S. government has shown an intense desire to “friend” you, to “follow” you, to get to know your every online move.
Now they’re channeling that desire towards legislation that clears a path for authorities to work with companies like Facebook, Google and AT&T to snoop on Internet-using Americans.
The Cyber Intelligence Sharing and Protection Act, or CISPA, is wending its way through Congress, where it could get a favorable vote unless elected representatives hear their constituents’ concerns in time.
That’s why a coalition of online rights advocates (including the Free Press Action Fund) have joined forces to kill CISPA before more of our online rights are lost to those seeking to turn the Internet into a massive surveillance complex.
Promoted to protect our national interests against a loosely defined horde of cyber-terrorists, CISPA goes far beyond its stated purposes, sacrificing almost all of our online privacy rights without any safeguards against abuse. It’s the type of misguided Internet legislation that we have seen in the past, where government and corporations craft restrictive new laws without giving Internet users a seat at the table. Will they never learn?
Groups including EFF, Avaaz.org, Free Press Action Fund, ACLU, Access, CDT and the American Library Association have just launched “Stop Cyber Spying Week” so that Washington understands that the online rights of millions of Americans are not negotiable. In addition to helping Americans contact Congress, these groups have unleashed the power of Twitter against any legislator weighing a vote for this bad bill.
The folks behind CISPA claim that national security interests make this surveillance necessary, but the bill’s language is so vague and overreaching that it opens the door for rampant abuse. Here’s what’s wrong:
CISPA could lead all too easily to governmental and corporate attacks on our digital freedoms. And while there is a real need to protect vital national interests from cyber attacks, we can’t do it at the expense of our rights.
Facebook, which supports CISPA, now counts more than 800 million users worldwide. It’s frightening to imagine a world where Mark Zuckerberg and his colleagues could act with impunity to help the U.S. government keep tabs on all of us.
The goal of Stop Cyber Spying Week is simple: Get Congress to back away from this dangerous legislation. The only way to do that is by speaking out.
"We have got to defeat this attack on the freedom of the mind…But it takes courage for a young man with a family to stand up to it; all the more obligation on those of us who have nothing left to lose. At any age it is better to be a dead lion than a living dog – though better still, of course, to be a living and victorious lion – but it is easier to run the risk of being killed (or fired) in action if before long you are going to be dead anyway. This freedom seems to me the chief consolation of old age." - Elmer Davis: Grandeur and the Miseries of Old Age.
“It seems to me like the government of the United States is, in itself, planning a rebellion against America,” wrote Gordon Duff, the senior editor of Veterans Today, on April 11. “Passing laws that violate the constitution is a crime.”
By the time history’s most evil and Godless Empire collapses, it will have already opened a new theater of war—and its last—in North America.
II. The North American War Theater: Objectives, Tactics, And Agents of Oppression
1. Consolidate economic, military, and political power in a fascist and authoritarian global government that will strengthen the rule of corporate power over the governments and peoples of America, Canada, and Mexico. This will be a government of, by, and for the 0.1 percent.
2. Create a two-tiered society, where there is no rule of law, and no sense of security for the common citizens.
3. Control and regulate the movement of the general public across the continent so as to regiment society and make it easier for the rulers to dominate their mindless prey.
4. Disarm the American and Canadian people as part of the larger program to pacify pockets of resistance to the illegitimate rule of the regional and global governmental authorities.
5. Destroy the wealth of the middle class, and restratify society. Turn the people into serfs, and keep their attention on day-to-day survival strategies and mindless distractions as opposed to big picture political solutions.
7. Rewrite the laws of the United States and Canada, and justify the radical changes by using the invented threat of terrorism. Pass nonsensical counter-terrorism legislation with the ultimate goal of creating a new global legal and political order that is authoritarian in nature.
9. State-approved churches and religious establishments.
10. Central planning departments.
A bill that nobody is paying any attention to is sailing through Congress: Senate Bill 1813. It passed the Senate by 74 to 22, and is expected to sail through the House as well. It’s an act “[t]o reauthorize Federal-aid highway and highway safety construction programs, and for other purposes.”
It’s the “and for other purposes” part of the title that has me worried—specifically Section 40304: “Revocation or denial of passport in case of certain unpaid taxes.”
This section would give the IRS the power to keep a U.S. citizen from traveling—
—and it’s another example of Executive Power run amok. It’s another example of how the United States is turning into a police-state.
The right to travel freely is sacrosanct—it’s not some privilege that the government bestows on us: It’s one of our basic freedoms as citizens. In point of fact, the countries that have limited their citizens’ ability to travel—the Soviet Union, the People’s Republic of China, North Korea, Cuba—were all rightfully called “police-states”: It’s one of their defining characteristics—the fact that they were keeping their citizens hostage.
In the United States, there are several, clearly defined reasons why you would have your passport either denied or revoked—and all of them pass the smell test.
In the case of a passport being denied, according to the U.S. State Department, the reasons are:
“a federal warrant of arrest, a federal or state criminal court order, a condition of parole or probation forbidding departure from the United States (or the jurisdiction of the court), or a request for extradition [by a foreign country].”
Additionally, failure to pay court-ordered child support in excess of $5,000 can also be grounds for the State Department to refuse to issue a passport to a U.S. citizen.
In the case of a passport being revoked, the law (22 CFR 51.72) says very clearly that:
A passport may be revoked or restricted or limited where:
(a) The national would not be entitled to issuance of a new passport under §51.70 or §51.71 [the above conditions]; or
(b) The passport has been obtained illegally, by fraud, or has been fraudulently altered, or has been fraudulently misused, or has been issued in error; or
(c) The Department of State is notified that a certificate of naturalization issued to the applicant for or bearer of the passport has been canceled by a federal court.
[54 FR 8532, Mar. 1, 1989, as amended at 64 FR 19714, Apr. 22, 1999]
Now, notice how, in both the denial and the revocation of a passport, the State Department is essentially carrying out the judgment of the courts.
The tiny town of Lakota, N.D., is quickly becoming a key testing ground for the legality of the use of unmanned drones by law enforcement after one of its residents became the first American citizen to be arrested with the help of a Predator surveillance drone.
The bizarre case started when six cows wandered onto Rodney Brossart’s 3,000-acre farm. Brossart, an alleged anti-government “sovereignist,” believed he should have been able to keep the cows, so he and two family members chased police off his land with high-powered rifles.
After a 16-hour standoff, the Grand Forks police department SWAT team, armed with a search warrant, used an agreement they’ve had with Homeland Security for about three years, and called in an unmanned aerial vehicle to pinpoint Brossart’s location on the ranch. The SWAT team stormed in and arrested Brossart on charges of terrorizing a sheriff, theft, and criminal mischief, among other charges, according to documents.
Brossart says he “had no clue” they used a drone during the standoff until months after his arrest.
"We’re not laying over here playing dead on it," says Brossart, who is scheduled to appear in court on April 30. He believes what the SWAT team did was "definitely" illegal.
"We’re dealing with it, we’ve got a couple different motions happening in court fighting [the drone use]."
Repeated calls to Brossart’s attorney were not returned. Douglas Manbeck, who is representing the state of North Dakota in the case, says the drone was used after warrants were already issued.
"The alleged crimes were already committed long before a drone was even thought of being used," he says. "It was only used to help assure there weren’t weapons and to make [the arrest] safer for both the Brossarts and law enforcement."
"I know it’s a touchy subject for anyone to feel that drones are in the air watching them, but I don’t think there was any misuse in this case," he added.