2013-03-03 14:37:50 by chort
Sometimes I need to move GPG/PGP secret keys around, but I get very nervous about having them "in flight." Of course the passphrase protects the key, but call me paranoid. I had been encrypting with OpenSSL, then decrypting right before import, then rm -P (or shred -u) on the file. Wouldn't it be nice to avoid having the decrypted key on disk at all? It turns out gpg can read from STDIN (and so can OpenSSL), so it's very simple.
srchost$ gpg --export-secret-key -a "user@domain" \
  | openssl aes-256-cbc -a -salt -out user.key.enc
dsthost$ openssl aes-256-cbc -d -a -in user.key.enc \
  | gpg --allow-secret-key-import --import -
gpg: secret keys imported: 1
- Comments (0)
2012-11-27 07:20:21 by chort
Today I tried to download some anti-virus software from the manufacturer's site. When I clicked the Download button embedded in their site, it sent me to a CNET download page, which I assume would have downloaded one of those special CNET installers. I say assume, because I didn't actually bother to download it once I realized I had been redirected to CNET.
That was an example of a wrong way to provide a software download, but what is the correct way to do it?
- Comments (0)
2012-11-05 07:37:35 by chort
Today I was directed to a blog post from VMware that discloses a leak of ESX source code. What struck me wasn't the leak itself, but the mention of security hardening guides. This isn't unique to VMware. Just about every enterprise IT vendor has hardening guides or knowledge base articles for how to take the default configuration, apply a bunch of changes, and make it more secure. This prompted me to muse about some ideal future where vendors instead post "softening guides" for the rare customer who wants to downgrade the default, highly-secure configuration.
Isn't that just wishful thinking on my part? Isn't it a good thing that vendors make the effort to create and publish hardening guides? I'll tell you why I think hardening guides are fundamentally dishonest and customers should demand better.
- Comments (0)
2012-10-19 23:09:05 by chort
This week there has been a debate about "security rockstars," which I've mostly tuned out. Today a comment jogged my memory: last year a PR consultant for our company (who appears to do a fairly competent job, not that I would know) heard that I was submitting a CFP for a conference. She told me "I made [insert name of "thought leader"] a rockstar. [person]'s blog now receives [number] of impressions a day. I can help you do the same thing."
I don't really fault the consultant here. She was trying to a) bill more hours (who doesn't want to do that?) and b) get more publicity for the company I work for (which is what we pay her for). I'm pretty sure she's good at her job and she chose her words carefully. This leads me to believe that her pitch is tailored to work on geeks like me. In my case, I did a polite version of running away screaming.
For my own satisfaction, it means a lot more to me to do work that I know is high-quality, know I'm helping other people, and be respected by my peers. I don't want teeming masses who barely know me to hold me up as some shining example when they don't even understand what I'm saying. I also don't want the pressure of being expected to be amazing all the time. I'm human, I make mistakes. I don't want my every decision under a microscope, so I don't go seek out publicity. It seems simple to me.
I realize different people have different priorities, and other people derive their self-worth in other ways. That's OK with me. If someone wants to be a "rockstar," fine. Just remember, with popularity comes scrutiny. The same people who held you on their shoulders will be twice as quick to kick you when you're down.
For everyone else, if you're sick of rockstars, stop feeding their behavior. PR reps wouldn't pitch geeks on becoming "rockstars" if it wasn't something a lot of geeks aspired to.
- Comments (0)
2012-10-15 20:56:43 by chort
Just some quick notes on the steps I had to do differently from the main documentation.
* You need to install build-essential, libboost-python-dev, and python-setuptools (possibly autoconf too, if build-essential doesn't pull it in)
* Manually apply static const int kMaxNumFunctionParameters = 65534; to v8/src/parser.h (V8-patch2.diff is out of date)
* Beautiful Soup 4 (python-bs4), html5lib, PEfile (python-pefile), chardet, httplib2, Zope.Interface (python-zope.interface), and scons are all available as packages; search for them with apt-cache search and install with sudo apt-get install
* You need to add --enable-python-bindings to options when configuring libemu
* Create /etc/ld.so.conf.d/libemu.conf with the line /opt/libemu/lib and run sudo ldconfig
I think that's it. I haven't tried analyzing any content yet, but python thug.py -h works at least. Let me know if this is helpful (or is missing a step).
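For convenience, the steps above can be sketched as one shell session. This is a hedged consolidation of my notes, not a tested script: package names are Debian/Ubuntu-era guesses, and the /opt/libemu prefix and source directory layout are assumptions.

```shell
# Sketch of the install steps above on Debian/Ubuntu (run with sudo rights).
# Package names and the /opt/libemu prefix follow the notes and may need adjusting.
sudo apt-get install build-essential libboost-python-dev python-setuptools autoconf
sudo apt-get install python-bs4 html5lib python-pefile python-chardet \
  python-httplib2 python-zope.interface scons

# Build libemu with Python bindings (source tree assumed already unpacked here)
cd libemu
autoreconf -v -i
./configure --enable-python-bindings --prefix=/opt/libemu
make && sudo make install

# Tell the runtime linker where the libemu libraries live
echo /opt/libemu/lib | sudo tee /etc/ld.so.conf.d/libemu.conf
sudo ldconfig
```

Remember the manual v8/src/parser.h edit still has to happen before building V8, since the shipped diff no longer applies cleanly.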
PS there was a guide for this already. Derp. It's prettier and more complete. Just remember to manually change v8/src/parser.h.
- Comments (0)
2012-09-24 22:12:59 by chort
Lately the security echo chamber has been reverberating with talk of information sharing. Many parties, including Oracle (in possibly the most ironic blog post of the year), are calling on the industry in general to share more information. The call is not unanimous, however. Several voices have urged restraint with information disclosure. Each side has good arguments, and I think everyone can agree that the status quo is not working. I urge more sharing; read on to see why.
- Comments (0)
2012-08-04 22:54:34 by chort
- Comments (0)
2012-08-04 16:36:24 by chort
What do Information Security and weight loss have in common? Many people who pretend to be interested in each try to get desirable results without making any substantial changes. I recently posed the question "would you hire a trainer & ask them to make you skinny & fit, as long as no exercise or diet change?" It was rhetorical of course, but one of the replies pointed out that most Americans would do just that.
Sadly I feel like I spent several years running late-night infomercials, selling expensive gadgets to people who wouldn't really use them. Sadder still is the prevailing attitude in the IT industry that buying a product is the solution to every tough problem, because it's easier to whip out the corporate checkbook than it is to solve the problem in a thoughtful way. The problem, as most of my peers know, is that these products rarely solve anything on their own. The only benefit is an organizational perception that "something has been done." When security incidents happen, because the issue wasn't solved comprehensively, everyone is shocked and loudly protests "but we were following industry best-practices!" The people who say things like that actually believe it. How can we change the tune?
- Comments (0)
2012-03-17 21:37:34 by chort
There has been a lot of noise recently about whether it's worth the cost to run anti-virus software. As laid-out in the Wired article, the opposing viewpoints typically boil down to:
FOR: Anti-virus is essential for protecting careless users.
AGAINST: There are more effective ways to spend security budget.
Those are both good points, so I think making a purely binary use it/don't use it decision is short-sighted.
Before I get to the main point, I'd also like to note that the only source on record in that article vigorously defending anti-virus is a giant analyst firm. You don't have to think very hard to see a huge economic reason for a company that makes a lot of money off of vendors to be a vocal cheerleader for the two companies who dominate security spending. A cynic might wonder about the quality of advice coming from an analyst who puts the interests of their own firm ahead of their customers'. It seems there's a lot of that going around. I'd wager this situation contributes greatly to the suspicion of giant AV vendors.
- Comments (0)
2012-03-10 13:35:28 by chort
One of the consistent themes I heard from attendees of B-Sides SF and RSAC this year was "this was the best year yet!" That is a huge turn-around from the cynicism that was so prevalent last year. I haven't quite put my finger on a root-cause for that sentiment, but perhaps it has something to do with increased focus on people and process over technology. Although I didn't take detailed notes this year, I will attempt to summarize the concepts from each of the sessions I attended and some of the "hallway track" themes.
SCADA Security: Why is it so hard? - Amol Sarwate
In many ways this talk was a rehash of the SCADA talks we're used to now: Lifecycles are long, field upgrades are hard, the protocols are brittle, the control networks aren't air-gapped, etc, etc. The only new information for me was the realization that Wireshark already has solid protocol analyzer support for many SCADA/ICS protocols (such as Modbus), and the news that Qualys are releasing a protocol-aware SCADA scanner for DNP 3 and Modbus. The advantage of such a scanner vs. traditional network tools such as NMAP is that the former is less likely to crash delicate SCADA endpoints.
At the end of the presentation, Joseph Weiss stood up and made an impassioned, yet unconvincing speech. He rattled off numbers of people killed and facilities damaged by "cyber attacks," but didn't cite any sources or credible evidence. The crowd reception could best be described as incredulous. I came away with the sense that Joe is dangerous and irrational, but maybe one of us just hadn't had enough coffee.
Automating Security for the Cloud: Why we all need to care… - Rand Wacker
I was hoping this presentation was going to explain how to automate cloud security, but it turned out to be more about why automating security is necessary [in retrospect, the title does say "why," so it was wishful thinking on my part]. Perhaps this is news to some folks. The only useful tidbit I picked up was that attackers are rapidly creating new VMs in cloud provider environments, trying to grab an IP lease that was recently used by another VM. They use the new VMs to scan for other VMs that allow trusted access based on IP address. In this manner attackers can impersonate previous VMs and gain access to services that are protected only by host firewalls. This is certainly a type of attack enterprises don't have to deal with on their private networks, and it goes to show that stronger authentication is needed beyond simple IP ACLs.
We are Handling Security the Wrong Way - Brett Hardin
This talk started off well, encouraging security practitioners to limit conclusions to those supported by data, and to readily accept challenges to our assumptions. In addition, it was suggested that outcomes should be used as feedback into future decisions (first of several talks to link incidents and metrics). It meandered a bit through the limitations of vulnerability assessment (referred to as "vulnerability management") software, and noted the frustrations of developer education. I didn't walk away with a good sense of what the next step is.
I chatted with Brett on the developer education topic after the presentation. He revealed that his experience did not show a quantifiable reduction in bugs per lines of code over a one year period. I related my positive experience in building rapport with developers, but acknowledged that I'm far from being able to measure the impact. We agreed that it's tough to scale a diplomacy approach, since so many security practitioners are not naturally adept in interpersonal relations. Unfortunately we weren't able to pursue the conversation beyond that.
So you want to be the CSO... - Daniel Blander
The key points from this talk were: Don't attempt to operate security programs in a vacuum (understand what business process you're protecting and why), be able to communicate a real value for security projects (as opposed to employing FUD), and understand the motivations of the different actors within your organization. Essentially broaden your horizon beyond pure tech and figure out how the people and processes interact to form the system.
Metrics That Don’t Suck: A New Way To Measure Security Effectiveness - Dr. Mike Lloyd
This was the second talk in just the first day to mention security metrics. Dr. Lloyd's talk was full of optimism and can-do spirit, which was appreciated. The presentation highlighted the use of metrics by the US Department of State in measuring the vulnerabilities present in systems at US embassies, and a process for creating attack-chains that visualized systems at risk via other systems.
I think there's a lot of value in simply starting these kinds of measurement programs, but I had the nagging suspicion the attack-chain model only represented a narrow slice of actual risk, since it focused on outside-in attacks through firewall ACLs into protected DMZs. With the rise in popularity of phishing and other social engineering attacks, a lot of systems are directly at risk that aren't visible inbound through a firewall. When I asked Dr. Lloyd whether they had thought of employing the attack-chain in reverse, i.e. start from a valuable server and see what all could reach it, he replied that the results tended not to be useful, since they often pointed to an anti-virus management console or monitoring system. He said that wasn't very useful for assessing risk, but a few researchers seated near me and I noted that these systems are prime targets for penetration testers and malicious actors for exactly the reasons mentioned (everything on the network can reach them, and they can reach everything).
How NOT To Do Security: Lessons Learned From The Galactic Empire - Kellman Meghu
This was a light-hearted talk full of pop-culture lulz, but little substance. It was the perfect talk to start a morning.
2012: The End of Security Stupidity - Amit Yoran, Kevin Mandia, Ron Gula and Roland Cloutier
As we were taking our seats for this talk, several people near me noted that panels are often shallow and light on useful information, relying on name-recognition to pull a large audience. I agreed and braced myself for a potentially mind-numbing session of self-congratulation and circular back-patting. Fortunately that was not the case.
Ron got things off to an interesting start by suggesting Anonymous is the best thing to happen to the information security industry. From there it delved into a deep discussion of the futility of preventative security controls and the importance of incident response and forensics. Kevin memorably stated "you're only as good as your best forensicator," meaning the effectiveness of your security is largely determined by the skill of your employees. Roland described how the security program at his organization has shifted drastically to focus on response. He said he talks to other organizations that don't have responders and he doesn't understand how they can function without them.
There was a lively round of audience participation at the end of the session. The best question was regarding how organizations could train incident responders to cope with the demand, noting that traditional IT security employees don't have forensic and malware analysis skills. Roland shared that his organization partners with local schools and colleges to hire interns to work on projects. He likes to get young students excited about information security to steer their study focus in school towards forensics and other relevant areas. He called out the University of Maryland, among others, as having a strong emerging infosec program. The panel in general encouraged organizations to find young talent and train them from scratch, rather than trying to convert old-school IT security practitioners who focus on firewalls and security appliances.
There were a number of other interesting topics and discussions during the panel that I simply don't have the room to cover. Suffice it to say this panel was my favorite session of the week. If you weren't there, you really missed out. Look for incident response to become an increasingly important topic this year. If you're a career infosec engineer who has focused heavily on security appliances, you need to rapidly adopt a new skill set or risk being passed over for a new generation of security workers.
Fundamental Flaws in Security Thinking - Martin McKeay
This talk focused on some of the erroneous assumptions about the security industry. People often assume that the goal of security is 100% safety from attacks, but that is simply unattainable. Striving for perfection is only going to burn people out and disappoint other parts of the organization (chiefly, management). Security professionals need to set reasonable expectations for how frequently attacks will succeed and what can be done to mitigate the impact. Hand in hand with that is the idea that security professionals are solely accountable for all success and failure relating to the security of data and operations. In reality, many parts of an organization are responsible for the security of the system, so that should be communicated and understood widely. Security professionals shouldn't try to take the weight of the world on their shoulders.
Money$ec Evolved - Jared Pfost and Brian Keefer
Since I was involved in this presentation, I will only summarize it briefly. Jared and I talked about the necessity of using incident response and root-cause analysis to measure the effectiveness of security controls. Jared pointed out additional ways that mature organizations can improve their efficiency through metrics, and how to communicate those visually and through narrative. We also gave a shout-out to Ben Sapiro's "We Are Losing" blog post. You can find our slides on the Third Defense Blog.
Your IR Team: More than Firemen and Maids - Wade Baker and Christopher Porter
By my count, the fourth B-Sides SF talk this year to heavily feature statistics and suggest setting metrics. The presentation made an argument for formally tracking and classifying incidents, for instance using the VERIS framework. The talk was quite compelling and did a good job illustrating how incidents can be charted and visualized.
Unfortunately, when I visited the VERIS wiki I found it rather disorganized. To me, the wiki doesn't do a good job of communicating how the framework can be implemented, and throws up a wall of words rather than diagrams and practical implementations. In all fairness it is under construction, and does give some examples, but more concrete tools would be welcome. If someone would release a spreadsheet template or simple app (Python, Ruby, etc) to jump-start organizations on their incident classification, that would be a huge public service.
Get Secure or Die Tryin' - Dave Shackleford
This talk was a great way to close out the conference, with a laugh a minute as Dave shared some of his real life pentest experiences. Although the main thrust was humor and catharsis, it did highlight how simple things like shared admin passwords, failure to audit the domain admin group membership, and failure to check for the most basic flaws in web apps can bring organizations to their knees.
Beyond the presentations, I had some really fantastic conversations at B-Sides. I got to talk with Adam Shostack about the work Microsoft is doing to improve User eXperience related to security. I understood the process to be identifying where users have insufficient information to make an informed decision, and either providing the appropriate information, or removing the choice. It's more nuanced than that and deserves a much deeper explanation, but that's the abstract concept.
I also spent a long time talking to Julia Wolf on far-ranging topics from malware reversing to the history of UNICODE. Hopefully we can expect some new posts from her on the FireEye Blog and perhaps a really fascinating piece of reversing will be revealed soon at a conference near you (I got a detailed walk-through and I assure you it will make a riveting presentation).
SIDEBAR: At a time when everyone loves to whine about how little information is being shared, I would like to point out how incredibly valuable it is to have folks like Julia, diocyde, Mila Parkour, Gary Golomb, Brandon Dixon, etc posting their research. I never would have worked up the motivation to get into forensics and malware analysis if it wasn't for their excellent reference sources. Mad props to everyone sharing their research. It's making a difference.
On Thursday I finally made it to the expo floor at RSAC (using a fake name, although I didn't find that newsworthy at the time) and had a chance to walk around. Although it was a lot more of the same as usual, I did get to visit a lot of new vendors who are working on problems I care about. One thing that helped a lot this year was having many contacts from Twitter who could provide feedback and help setup meetings. That made the floor-search process much more rewarding. For example, I met with David Mortman to discuss how enStratus has designed their service for cloud management. We dove into the architectural details at a level we probably wouldn't have had from a sales or marketing person, cutting to the heart of what I needed to know. That was invaluable (PS I recommend talking with them if you're trying to manage private or public cloud projects).
Friday I capped the week with the Security Wineout organized by MC Petermann and Dr. Paul Judge. Besides the obvious good food and great wine, I got to chat with Paul about his latest venture, Pindrop Security, which is a lot more interesting than it would sound at first blush.
So that wraps up another B-Sides SF & RSAC. I learned a whole lot, much of which I attribute to the contacts I was able to make via Twitter. Peace out.
- Comments (0)
2012-02-25 12:15:15 by chort
Just a quick note to let folks know my schedule for RSAC week. I'll be at BSidesSF both Monday and Tuesday all day. Tuesday afternoon at 2PM @JaredPfost and I will be giving our follow-up to the Money$ec talk we did last year. Thursday morning I plan on being at the Securosis Recovery Breakfast, and Friday will be the Security Wineout with @pauljudge and @petermannmc.
Unfortunately I don't think I can stay for Baysec or the BSidesSF party on Monday night. I might spend some time on the RSAC exhibit floor Thursday, but that's iffy. If you want to meet me, Monday and Tuesday at BSidesSF are your best bets, or Thursday morning at the recovery breakfast. Make sure to mark your calendar for the Security Wineout next year so you don't miss out again!
- Comments (0)
2012-01-19 00:01:12 by chort
A coworker once told me he imagined immigration officials handing Chinese immigrants two bags with slips of paper, asking them to pick a paper from each bag and put them together to form the name of their restaurant. This is how he imagined names like "Green Dragon," or "Golden Lotus," or "China Garden" got created. While it might not be a very accurate way to describe culinary establishment marketing, it is similar to how many users choose passwords. I'm calling this method the "Chinese Take-out Attack."
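The method is easy to sketch: draw one word from each bag and concatenate. A minimal shell illustration of why that's weak (the wordlists here are invented for the example, not from any real dictionary):

```shell
# "Chinese Take-out Attack" candidate generator: every password is one word
# from each bag, concatenated. Wordlists are illustrative placeholders.
bag1="Green Golden China"
bag2="Dragon Lotus Garden"
for first in $bag1; do
  for second in $bag2; do
    printf '%s%s\n' "$first" "$second"
  done
done
```

Two bags of three words yield only 9 candidates; even two bags of 10,000 common words yield a mere 100 million combinations, which is trivial to enumerate compared to a true brute-force search of the same password length.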
- Comments (5)
2012-01-17 10:46:51 by chort
You may be aware that the DHS are now sending (opt-in) "Daily Cyber Reports" to IT and security practitioners. The stated purpose of the reports is "to facilitate a greater understanding of the nature and scope of threats to the homeland." I wonder if they're aware of the threat they're creating by teaching people to open PDF documents from unauthenticated email? Well they have no excuse now, because I told them. Here's a copy of the email I sent them on the topic.
1.) Create a DKIM record for hq.dhs.gov and use it to sign the headers of the email, so recipients can verify it was really sent by hq.dhs.gov, rather than a phishing site.
2.) Publish a public key for OSINTBranchMailbox [at] hq.dhs.gov on a website that has a DNSSEC-signed record.
3.) Use the private key (GPG or S/MIME) to sign messages sent from OSINTBranchMailbox [at] hq.dhs.gov
4.) DO NOT INCLUDE ATTACHMENTS, unless they are plain text. Training users to open Adobe and Microsoft documents is the worst thing you can do, when most compromises are initiated with poisoned Adobe or Microsoft documents.
5.) Host the Cyber Report on a website that has a DNSSEC-signed DNS record and an SSL certificate that matches the hostname of the website and chains up to a trusted root.
If you're going to advise organizations on security, you should secure your infrastructure and comms too. Lead through action.
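Point 1 boils down to publishing a selector record in DNS and signing outbound mail against it. As a sketch, the record would look something like the following; the selector name and key material are placeholders I made up, not DHS's actual records:

```
; Hypothetical DKIM selector record for hq.dhs.gov
; "reports" (the selector) and the p= key are placeholders
reports._domainkey.hq.dhs.gov.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3...IDAQAB"
```

A recipient's mail server looks up this TXT record and verifies the DKIM-Signature header of the message against the published public key, so a phisher without the private key can't produce a passing signature.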
PS you haven't configured your authoritative DNS server properly. The template default value for email address is showing in the SOA.
- Comments (0)
2012-01-02 23:28:32 by chort
Recently I was asked for some pointers on creating a security roadmap. Since there's no one-size-fits-all strategy for which programs or technologies to implement, this is a tough question to answer. After thinking about it for a few minutes, I stepped back and put together this abstract, which is really what security boils down to after all. The rest is implementation details.
- Comments (0)
2011-10-29 16:03:45 by chort
In the last few weeks I've learned a lot about applying GPUs to break password hashes. I'd like to thank @ErrataRob for writing the blog post that got me started in this field. If you haven't read Rob's post, I highly recommend you do that first, because this post builds on it. Don't buy a graphics card until you've read my post though, because there are some important updates.
- Comments (2)
2011-06-27 00:03:05 by chort
For the past 50 days LulzSec has captured the attention of the information security community, the mainstream media, and just about every other kind of media. Has anyone stopped to wonder what it is that causes the LulzSec saga to be so "sticky?"
- Comments (2)
2011-05-26 14:08:33 by chort
It struck me today that events are in motion for unavoidable cyber-conflicts. This statement won't shock anyone, since sensationalists have been predicting "a digital Pearl Harbor" for years. I don't agree with the predictions. In fact, I don't think it's likely that any warfare-like confrontations between nation states in cyberspace will happen in the near future. Sure there's rampant electronic espionage, but that hardly counts as warfare.
I think we're already seeing the beginning skirmishes in far more important events. We've seen protestors in various oppressed countries fighting to circumvent filtering and outright disconnection. We've seen massive DDoS attacks against draconian "Big Content" companies in retaliation for their heavy-handed treatment of their own customers. We've seen resourceful people overcome collateral damage caused by clumsy and ignorant government attempts to censor the Internet right here in the United States.
I don't see these events as anomalies or outliers. I see them as precursors. I think there's a strong undercurrent of opposition to the increasing attempts by governments and extremely large corporations to infringe on individual rights. In spite of that, it seems executives of these corporations are determined to forge ahead with rights-trampling legislation to restrict how individuals can access the Internet.
So what happens when out-of-touch elites try to enforce their will on the vast unwashed masses? That's when you get cyberwar. The people enacting new surveillance and censorship measures are forgetting that digital is the great equalizer. Any kid with a $200 laptop can take down a multi-billion dollar corporation. The more laws Big Content lobbyists have passed to make life miserable for average citizens, the more Anonymous* members they are going to create. It's difficult, although not impossible (as dramatically shown in the middle east this year) to physically resist power. To digitally resist power is nearly effortless. Those in favor of extreme enforcement of content "rights" are picking a fight they cannot reasonably be expected to win. The only question is how long it will take them to lose.
*To be clear, I'm not now, nor do I ever plan on being a member of Anonymous.
- Comments (0)
2011-04-18 22:59:03 by chort
Thus far, all the speculation I've seen regarding the RSA SecurID breach has centered on the idea that if attackers could somehow discover the serial numbers of tokens in use, they could derive the seed and whittle it down to 1-factor authentication. The advice from RSA certainly lends credibility to that theory, since they're essentially telling customers to double the length of the PINs in use, exponentially increasing the difficulty of guessing that factor.
If we accept the claim (and I am not suggesting we should merely for being asked to) by RSA that the attack was sponsored by an arm of the Chinese Communist government (let's drop the diplomatic "APT" BS), then perhaps there is another threat vector we haven't considered. As we know, plenty of counterfeit gear is manufactured in China. There is also speculation that what was stolen was not the seed database itself, but the serial-to-seed mapping algorithm. Imagine if they were able to create knock-off SecurID tokens that actually worked, then pollute the supply chain through resellers, and have them end up in organizations that are later targeted for break-ins.
It's clear from past behavior, the Chinese government and/or military are willing to take the long view on industrial espionage. I'm sure they wouldn't mind waiting for this gear to infiltrate high-value organizations. Besides, imagine if they added a few "bonus" features to the tokens, such as cellular radios, and microphones.
No, I don't have any inside information, this is all speculation on my part. This is just an angle I haven't heard anyone mention yet.
- Comments (0)
2011-03-20 20:27:04 by chort
Many security practitioners are familiar with Fail2ban, an application that scans log files for various types of suspicious failures and bans the source IP after too many attempts. Most users implement it to protect their Linux systems (via Netfilter/iptables and TCP wrappers), but it also includes methods for Sendmail and IPFW (FreeBSD and OSX).
What is notably missing from the above list is the wildly popular PF (Packet Filter). It was originally designed by Daniel Hartmeier to replace IPF in OpenBSD, but has since been adopted by FreeBSD, NetBSD, and DragonflyBSD. PF is widely embraced due to the simplicity and clarity of the syntax, and the comprehensive array of professional-grade features available.
Ironically, PF is probably better known now due to FreeBSD than the originating project, OpenBSD. It's somewhat startling that no one has yet included PF support in Fail2ban. It's also disappointing that Apple hasn't switched from IPFW to PF as their packet filtering firewall (hint hint).
In the spirit of the Open Source "submit a patch or GTFO" mentality, here's how you can use Fail2ban to insert rules into your PF firewall.
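To sketch what the glue looks like: a Fail2ban action for PF can shell out to pfctl and manage a table of banned addresses. Everything below is illustrative (the file name, the table name "fail2ban", and the assumed pf.conf rules), not the exact patch from this post:

```
# /etc/fail2ban/action.d/pf.conf -- hypothetical Fail2ban action using a PF table.
# Assumes pf.conf already contains:  table <fail2ban> persist
#                                    block in quick from <fail2ban>
[Definition]
actionstart =
actionstop  =
actioncheck =
actionban   = pfctl -t fail2ban -T add <ip>
actionunban = pfctl -t fail2ban -T delete <ip>
```

The nice property of a persistent PF table is that bans and unbans don't require reloading the ruleset; pfctl just edits the table in place.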
- Comments (5)
2011-03-05 16:45:30 by chort
Recently I decided to write an application for Twitter to report changes in my friends and followers. As part of the process I went looking for a pre-built library of methods that I could use to interact with the Twitter API. I settled on python-twitter as an actively-developed solution that should keep up with changes to the API.
Due to Twitter's rocky past with SSL/TLS (henceforth simply SSL) support on their web interface, I decided it would be prudent to investigate whether their API used SSL. It turns out that it does, and it has a properly signed certificate. Then I looked at python-twitter to see if it had an option to connect over SSL, and was pleased to notice that it does by default. On a hunch I checked out the underlying library that python-twitter is using to make HTTP requests, and I was shocked at what I found.
- Comments (2)
2011-02-20 14:55:29 by chort
Ready for a shocker? You shouldn't be spending all those resources trying to shore-up your network against attacks. It sounds insane, but this is the conclusion I've reached after spending a week talking to some of the best and brightest minds in Information Security.
- Comments (0)
2011-02-19 21:07:55 by chort
I just took 3 days off from work to attend BSidesSF and the Barracuda Networks Security Wine-out, with an interlude to work the RSA Conference. The following is a rambling summary of the topics and ideas I encountered this week, along with my commentary.
- Comments (0)
2010-11-23 15:45:54 by chort
Surrendering my 4th amendment rights should not be a condition of travel within the United States.
With the strengthening of cockpit doors and revised flight procedures restricting cockpit access, the likelihood of a hijacking being leveraged to use an aircraft as a weapon has been drastically reduced. Couple that with passengers' realization that compliance with terrorists is not in their best interest, and the probability of any future airline attack causing more casualties than the passengers and crew on board is near nil.
This means that airplanes are no different from sports stadiums, shopping malls, trains, buses, subways, cinemas, or scores of other kinds of venues where inflicting hundreds of casualties is possible.
We cannot create a police state where every citizen must be viewed naked or sexually groped in order to venture into public places. Stop the Security Theater with airplanes and the inconvenience to millions of people who must fly for their jobs every week.
You may send your own complaint to the TSA here.
PS Of the last 3 terrorist attempts against aircraft bound for the United States, only 67% were against passenger planes, none of them were hijackings, and none of them went through TSA security. Given those facts, do you really think drastic and invasive escalations against US citizens are necessary?
Update: Thanks to @georgevhulme for pointing out several typos. Also thanks to @mckeay for reminding me that money talks--I've stopped flying short trips (as of last year) due to TSA hassles, and have been driving instead. That takes money away from airlines, pollutes more, and (statistically speaking) causes more deaths. How is this "security" helping again?
- Comments (0)
2010-11-17 11:59:28 by chort
If I were a CSO, I'd go to firms like Securosis for analysis. Why? Because they have a no BS approach. They call out vendors for bogus claims and useless products. People who have been in the security field for a long time and have really looked critically at enterprises and vendors can spot regurgitated marketing spin a mile off. We can also tell when advice being given has no foundation in actual experience.
It seems like the vast majority of "analysis" is simply an indicator of herd mentality. I don't want to know what a bunch of people with no idea are doing; I want to know what intelligent and measurably successful people are doing. The "conventional wisdom" is often wrong. The "best practices" are rarely updated, and usually only with additions of new practices, not subtractions of outdated practices.
That sentiment is echoed by few analysts outside of Securosis, but one of them is Josh Corman of The 451 Group. I'm not familiar with The 451 Group's work, but if their hiring practices are any indication (in addition to Corman, they've recently picked up Wendy Nather and other common-sense folks to fill out their ranks), it's probably solid.
It's about time people started applying healthy skepticism and subject-matter expertise, rather than the modern-day version of "nobody got fired for buying IBM".
- Comments (0)
2010-11-16 23:44:30 by chort
There has been a lot of press and grass-roots coverage of the TSA recently, specifically revolving around the increased usage of backscatter x-ray devices and more invasive physical inspections. Various DHS and TSA officials have made statements to the effect that they're sympathetic to the complaints, but the new measures are "necessary" and they're "striking a balance" between constitutional rights and security.
When I hear someone say "strike a balance" I visualize a see-saw, or a scale of justice, where the two sides are equally weighted in order to balance them. If we were to take the comments by Janet Napolitano and John Pistole at face value, we might reasonably think they're trying to find a middle ground somewhere between completely acceptable (say, passing through a magnetometer) and totally unacceptable (like cavity searches). The problem is that there is no balance. The scale is so far tilted to the side of violating constitutional rights that even a former Director of TSA Security Operations, Mo McGowan, actually admitted these measures violate the 4th amendment.
- Comments (0)
2010-11-03 14:38:57 by chort
I just finished reading @TanAtHNN's 1999 paper contrasting the inspection of electrical devices and safes with software and information security products (thanks to Josh Corman for bringing it up). The paper pointed out failings of prominent technology associations in the area of certification, and held up encryption standards (such as FIPS) as examples of how it could be done right.
Overall I think the paper raises good questions. I think you would be hard-pressed to find people in the industry (especially security researchers) who don't think companies should be held to a higher-than-current standard for information technology. I believe the paper comes up a bit short, however, in recognizing the differences between physical products and digital products.
- Comments (0)
2010-04-14 07:57:07 by chort
Ready for a shocker? A lot of the things your IT/Security department makes you do are stupid. According to Microsoft researcher Cormac Herley quoted in The Boston Globe, many "common sense" security practices are economically unwise. In plain English: You lose more money following a lot of security recommendations than you would by just letting the bad thing happen and dealing with the aftermath.
To continue, flip over the keyboard and read the sticky note...
- Comments (0)
2010-04-13 20:12:06 by chort
As many people know, Apple introduced Parental Controls in Tiger. The current version in Snow Leopard allows administrators to block potentially inappropriate content, specific sites, and access to unapproved applications.
The first two work more or less how you would expect (although the error message when a site is blocked for content has been bewildering in my experience), but the application ACLs are a disaster. They prevent the application from being run if it's not approved for that user (in fact, with Simple Finder enabled you can't even see it), but it's when you try to allow a restricted user to access an application that the fun starts.
I haven't examined it in depth, but it appears that OS X adds some kind of wrapper or extended attribute to an application when you enable a restricted user to run it. The problem is that this extra layer is extremely invasive, and most of the apps I've tried to use it with simply crash. Not only do they crash for the restricted user, but they also crash for unrestricted users. It's demonstrably the Parental Controls that cause this problem, because if you Trash the app and reinstall it, leaving Parental Controls alone, the app will run fine for unrestricted users.
Parental Controls have been around since Tiger, and this problem existed for sure in Leopard (possibly Tiger--I forget when I started using the feature) and definitely still exists in Snow Leopard. So I have a simple question for Apple: did you bother to QA this feature at all? I know I've submitted the automated reports at least a few times after OS X detected an app crash, and they do include audit-trail information showing that Parental Control attributes were changed for the app prior to it crashing.
- Comments (0)
2010-03-25 14:59:39 by chort
Apple's operating system has long been considered a refuge for those sick of viruses and malware that plague Windows systems, but this reputation for safety has been widely misinterpreted to mean the design is safe. In fact, as has been widely recognized in the security community, it's the relative rarity of Apple machines on networks that simply makes them an economically uninteresting target.
Apple for their part have enthusiastically encouraged this misconception, and while they've benefited from the positive PR, they haven't actually taken the concept of safety to heart. Much like the corporation in Redmond that they delight so much in mocking, they seem determined to ignore security issues until they affect public perception.
Read on for the ownage ->
- Comments (0)
2010-01-24 23:54:49 by chort
I've been noticing that since I put up this blog I've been getting scans for common PHP files/site layouts. This is interesting because my main site hasn't been scanned for them at all during the same time period.
I also noticed that the majority of the spider traffic to my blog is from Baidu, in contrast with the rest of my site.
I had forgotten how fun it is to scan my webserver logs for patterns.
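The kind of pattern-scanning described above takes only a few lines of Python. The probe patterns and log entries below are hypothetical stand-ins for what such a scan might match:

```python
import re
from collections import Counter

# Hypothetical request patterns for common PHP packages that scanners probe for.
PROBES = re.compile(r'phpmyadmin|/pma/|setup\.php', re.I)

def count_probe_sources(lines):
    """Tally client IPs (first whitespace field) whose requests match PROBES."""
    hits = Counter()
    for line in lines:
        if PROBES.search(line):
            hits[line.split()[0]] += 1
    return hits.most_common()

# Sample Apache-style access log entries (made up for illustration).
sample = [
    '10.0.0.5 - - [24/Jan/2010:03:12:01 -0800] "GET /phpmyadmin/index.php HTTP/1.1" 404 208',
    '10.0.0.5 - - [24/Jan/2010:03:12:02 -0800] "GET /pma/setup.php HTTP/1.1" 404 208',
    '192.0.2.9 - - [24/Jan/2010:04:40:11 -0800] "GET /index.html HTTP/1.1" 200 1042',
]
print(count_probe_sources(sample))  # → [('10.0.0.5', 2)]
```

Feed it a real access log with open(path) instead of the sample list and the top offenders fall right out.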
- Comments (0)