Symantec, you’re doing it wrong…

At work we have an SSL certificate that is no longer required, so we want to quietly let it expire and get on with our lives. Unfortunately Symantec is doing their best to trick us into renewing it. The email mentions an “order number” even though we haven’t ordered anything (perhaps their system persistently reuses the original order reference instead of generating a new one for the renewal?) and claims that “payment has failed” even though there was no attempt to renew. There is also no way to explicitly tell them that you want to let the certificate lapse and stop being hassled about it. This is the kind of scam email I would expect from dodgier companies, but it seems Symantec can stoop this low too:

Dear customer,

Your certificate has been revoked because your payment was not accepted. This notification refers to the following order:

Order number: xxxxxxxxx
Certificate name: xxxxxxxxxx

Don’t forget that Symantec™ SSL solutions help build customer trust at every point of interaction, from search to browsing through to purchase. If you change your mind, you can of course purchase a new Symantec™ SSL certificate. Simply log in to your Symantec™ Trust Center account:

https://trustcenter.websecurity.symantec.com/process/retail/trust_console_login?application_locale=VRSN_DE

Chat with customer support if you have questions or need help:

https://knowledge.verisign.de/support/ssl-certificates-support/index?page=chatConsole

Thank you!
The Customer Support Department of

https://knowledge.verisign.com/support/trust-seal-support/index.html

So what is their suggestion to stop these scammy emails? Change the associated email address to /dev/null of course!

Jackson : Good day, how may I help you today?
Martin Barry: We had an SSL certificate that is no longer required. Why do you keep sending scam-like emails saying “your certificate was revoked because your payment was not accepted”?
Martin Barry: It includes an “order number” like it was an invoice that was not paid correctly, seems designed to trick someone into renewing it.
Martin Barry: It’s slimy and not the kind of thing I would expect from a company like Symantec
Jackson : Hi Martin, what I can do is, if you give me the order number and you agree, I will change the email address to an invalid email so you will not receive it.
Martin Barry: No, that is not acceptable.
Martin Barry: These emails misrepresent the situation and you should stop sending them
Jackson : Unfortunately I cannot stop the system from sending them. One of the solution is what I have suggested which is changing the listed email in the order.
Martin Barry: But it’s not “an order”
Jackson : I was referring to the SSL certificate order issued and you no longer require.
Martin Barry: Can you please file a bug against your “system”? Emails should explicitly state it’s expired, not “revoked”, and there should be no mention of “payment failure” if there was never any attempt to renew it. There should also be a way to explicitly indicate that the certificate is no longer required and all communication about it to be ceased.
Jackson : I see Martin, I will escalate this to my supervisor and liaise with Marketing.
Jackson : For the meantime, would you like to proceed with what I propose so you will stop receiving those email?
Martin Barry: No, I want to see how long you persist in sending them.
Jackson : May I have an order number so when I escalate this I could have an example to reference to?
Martin Barry: xxxxxxxx
Jackson : Thank you Martin, I will escalate this.


Historical DNS Quirks

I love understanding how particular parts of Internet infrastructure evolved into their current form and the quirks of history that shaped them that way. Last night’s spelunking was triggered by this tweet from @miekg:

…which led him to write up his findings here.

His maths was correct, in that you could fit 14 root name servers in a 512-byte payload, and the presumption that having only 13 was mere conservatism seemed sensible.

But my mind quickly drifted to the thought that the root name servers used to have unique names under their hosts’ domains (e.g. ns.nasa.gov) rather than under root-servers.net, which meant that label compression saved roughly half as many bytes as it does now with the shared domain. Those thoughts led to this confusing tweet:

…followed up quickly with:

Along with www.internic.net/domain/named.root and www.donelan.com/dnstimeline.html, another interesting link I turned up was this DNS Root Name Server FAQ, and @isomer dug up an old hints file from 1993.

An interesting quote from www.isoc.org/briefings/020/ explains why VeriSign operates two root servers:

Q: Why has IANA given two servers to VeriSign?

A: This answer needs a little bit of history: When the number of possible letters was increased to 13, IANA asked USC ISI and Network Solutions Inc. to set up additional servers with the intention to move them to suitable operators quickly thereafter. J&K were set up at Network Solutions on the US east coast, L&M at USC ISI on the west coast. Both K and M moved further east and west respectively soon thereafter. However as time progressed, moving a server became subject of increasingly inconclusive debates. Still IANA succeeded in moving L to ICANN. Some say this worked because ICANN was in the same building as both ISI and the IANA, a physical move was not immediately required and operations could be supported by the people operating B already. ;-) More likely it succeeded because ICANN at the time was the only organisation about which at least some consensus could be achieved. After that nothing moved anymore and J remained with VeriSign who had acquired Network Solutions.

Back to my original line of thought, the choice quotes from www.donelan.com/dnstimeline.html are:

21 Apr 1993
Root server list UDP packet size limit exceeded
31 Aug 1993
Bellovin suggests using pseudo-host root.net to pack server list

and

4 Aug 1995
root-servers.net introduced into root zone; ns.nasa.gov changed IP addresses; ns.isc.org uses net 39 experiment address
1 Sep 1995
ns.internic.net changed to a.root-servers.net (last root-servers.net change)

Basically the old scheme hit its limits at around eight root servers and, in order to add more, a switch to a common domain was arranged to boost the effect of label compression. Of course, there was still room for improvement:


Monitoring network traffic: pmacct and Graphite

Recently at work we’ve needed to gain visibility into traffic flows across a global MPLS cloud. I would have explored open-source solutions anyway, but we didn’t have any budget so I was pushed that way regardless.

The first part of the puzzle came in the form of pmacct, a daemon that uses libpcap to listen on an interface and capture traffic data for export to other stores (file, database) or formats (NetFlow, sFlow). We mirrored the relevant switch port and quickly had pmacctd capturing traffic flows and storing them temporarily in memory. Our configuration (/etc/pmacct/pmacctd.conf) is ridiculously simple:

daemonize: true
plugins: memory
aggregate: src_host, dst_host
interface: eth2
syslog: daemon
plugin_pipe_size: 10485760
plugin_buffer_size: 10240
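
With that in place the collected table can be sanity-checked with the bundled pmacct client; -s dumps the whole in-memory table, one row per src_host/dst_host pair with packet and byte counters:

pmacct -s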

The second part of the puzzle was trickier, as we wanted to graph flows but could not be sure in advance of all the flows / data points involved. That ruled out a number of solutions that require pre-configuration of the data stores and graphs (e.g. Cacti). I settled on Graphite because of its ability to start collecting data for a new flow the moment it receives a data point it has never seen before. After the usual wrangling to get it working on CentOS 6.3, the only real configuration required was in /opt/graphite/conf/storage-schemas.conf:

[mpls]
pattern = ^mpls\.
retentions = 1s:30d

The final part of the puzzle was the glue to get the data out of pmacct and into Graphite. I wrote a simple Perl script that runs the pmacct client, reformats the data and then feeds it to the Graphite Carbon daemon. We originally had it running once per minute but eventually tried out 1 second intervals and, when that caused no issues, we stuck with that. I’d like to share the script but it has so many idiosyncrasies relevant only to our environment that there wouldn’t be much point. Perhaps if I find the spare time to generalise it a bit more I can add it later.
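
Purely as an illustration of the general shape (and not the script described above), a minimal Python sketch might look like the following. It assumes the memory plugin configuration above, Carbon listening on its default plaintext port 2003 on the same host, and metric names under the mpls. prefix to match the storage schema; the exact column layout of pmacct -s output varies between versions, so the parsing may need adjusting.

#!/usr/bin/env python
# Hypothetical sketch (not the script described above): scrape the pmacct
# memory table and push byte counters to Carbon via the plaintext protocol.
import re
import socket
import subprocess
import time

CARBON_HOST = "127.0.0.1"  # assumption: Carbon runs on the same box
CARBON_PORT = 2003         # Carbon's default plaintext listener port

def poll_once():
    # "pmacct -s" dumps the in-memory table built from the aggregate
    # primitives (src_host, dst_host). Assumed data row layout:
    # SRC_IP DST_IP PACKETS BYTES -- adjust the regex for your version.
    output = subprocess.check_output(["pmacct", "-s"], universal_newlines=True)
    now = int(time.time())
    lines = []
    for row in output.splitlines():
        m = re.match(r"^(\d+\.\d+\.\d+\.\d+)\s+(\d+\.\d+\.\d+\.\d+)\s+\d+\s+(\d+)", row)
        if not m:
            continue  # skip header, footer and non-IPv4 rows
        src, dst, byte_count = m.groups()
        # Dots separate path components in Graphite metric names, so rewrite
        # the dots inside the IP addresses.
        metric = "mpls.%s.%s" % (src.replace(".", "_"), dst.replace(".", "_"))
        lines.append("%s %s %d" % (metric, byte_count, now))
    if lines:
        sock = socket.create_connection((CARBON_HOST, CARBON_PORT))
        sock.sendall(("\n".join(lines) + "\n").encode("ascii"))
        sock.close()

if __name__ == "__main__":
    poll_once()  # run from cron, or loop with a short sleep for 1s resolution

Because the memory table keeps cumulative counters, either clear it between polls or let Graphite turn the counters into rates at render time, as in the target example further down.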

The end result is fantastic: being able to pull up graphs like the one below, which uses Graphite functions to show only flows with a maximum throughput greater than 1 Mb/s. We identified and resolved a production issue within the first 24 hours and a few more in the first week.

Traffic graph generated by Graphite using data collected from pmacct.
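
For the curious, a render target along the following lines does that filtering (a hypothetical example, assuming cumulative byte counters stored under mpls.*.* as in the sketch above): nonNegativeDerivative() turns the counters into bytes per interval (bytes per second at one-second resolution), scale(..., 8) converts bytes to bits, and maximumAbove() keeps only the series that peak above 1 Mb/s. If the feeding script stores per-interval byte counts instead, drop the derivative.

maximumAbove(scale(nonNegativeDerivative(mpls.*.*), 8), 1000000)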


Fail2ban monitoring itself recursively

I use fail2ban to monitor brute-force login attacks on my server. However, it was quite clear that the short bans, intended to deter bots but not real users with fat fingers, weren’t actually deterring the bots. As soon as a ban was lifted a lot of bots came straight back and kept trying, only to get banned again. What was needed was for fail2ban to monitor itself and ban IPs for longer after repeated shorter bans. Of course others had already figured this out, so the configuration for my FAIL2BAN filter and jail came from here.

But after a few months of running that configuration it became clear that the bots would just wait out the longer ban and come straight back again. They are never going to get very far testing 15 user/pass combinations a week, but those damn kids need to get off my lawn. Enter FAIL2SQUARED. This also monitors the fail2ban log file, but it watches only for bans issued by the FAIL2BAN jail. If an IP earns a second ban within a month, FAIL2SQUARED blocks it for six months.

 

filter.d/fail2squared.conf

failregex = fail2ban.actions:\s+WARNING\s+\[fail2ban\]\s+Ban\s+<HOST>

 

jail.conf

[fail2squared]

enabled = true
filter = fail2squared
action = iptables-allports[name=FAIL2SQUARED]
         sendmail-whois-lines[name=FAIL2SQUARED, dest=root, sender=root, logpath=/var/log/fail2ban.log]
logpath = /var/log/fail2ban.log
maxretry = 2
# Find-time: 1 month
findtime = 2592000
# Ban-time: 6 months
bantime = 15552000
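
Before enabling the jail, the filter can be tested against the existing log with fail2ban’s bundled test tool (assuming the filter was saved under /etc/fail2ban/filter.d/):

fail2ban-regex /var/log/fail2ban.log /etc/fail2ban/filter.d/fail2squared.conf

It reports how many lines the failregex matched, which should line up with the Ban entries already in the log.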


#newnewtwitter Mobile: UX clangers are just the beginning…

So I’ve been using Twitter’s mobile website for a while and it’s been updated a number of times, most recently a few months ago. There was no fanfare until just recently, when #newnewtwitter was announced and it became clear that the current look was part of this redesign effort. What surprises me is that the site has four fundamental issues that irritate the hell out of me: the default landing page, unauthenticated requests for restricted pages, language settings, and picture links going to the normal website.

Some background…

The UX issues (default landing page, unauthenticated requests for restricted pages) have been particularly highlighted to me because I use an old work phone that I have lying around, for want of something better. The sad part of this story is that the phone is running Windows Mobile 6.5. The tragic part is that if you load any decent-sized web page it deletes all cookies (I know, I know, #firstworldproblems). So while the UX issues might mildly annoy a normal user, losing the cookies and being repeatedly forced to re-authenticate makes the clangers really obvious and turns the rage up to 11.

Default landing page…

Which is the more common task a user of a service will perform: registering an account or logging in? For everyone but a spammer the ratio will be 1 to “some really large number”. For a mobile site, where the user is more likely to have registered via other means already, the score will more commonly be 0 to “some really large number”. Common sense suggests that your default landing page should serve the most common task, so present the user with a login form. Twitter has chosen to make the default landing page their registration form; logging in requires following a link to the login form. Perhaps Twitter has some A/B testing that indicates this leads to more registrations, but for existing users it just seems “pessimised” for the most common task.

Unauthenticated requests for restricted pages…

Most sites, when a user requests a restricted page before they have authenticated, handle it in a fairly straightforward and smooth way.

  1. User requests a restricted page without being authenticated.
  2. Show the user a login form or redirect them to the login page.
  3. After successful authentication redirect the user back to the page they originally requested.

Twitter, in their wisdom, has chosen a different method for their mobile site.

  1. User requests a restricted page without being authenticated.
  2. Redirect the user to the registration page.
  3. User has to click through to the login page.
  4. After successful authentication redirect the user to the first page of their timeline.
  5. User has to manually navigate back to the page they originally requested.

Language settings…

Twitter’s mobile site ignores your language settings and uses geo-location of your IP address to select which language to display.

Picture links go to the normal website…

Don’t get me started on Twitter’s self-serving t.co service, and I know Twitter has no control over other links in tweets, but when someone has uploaded a picture to the pic.twitter.com service Twitter hasn’t bothered to offer a mobile-friendly way of viewing it, even if you are clicking through from their mobile site. I’m sure newer phones cope better, but Opera on Windows Mobile 6.5 chokes horribly when presented with the main Twitter website.


Great NANOG presentations

I was reminded by a tweet from @dritans that there are a lot of great NANOG presentations which tend to get buried in the archives.

The particular one he linked to is A Practical Guide to (Correctly) Troubleshooting with Traceroute [222KB PDF] by Richard A Steenbergen. This is a terrific primer for those who have never dug deep into traceroute tools, how they work and what they can show you. It’s quite easy to misinterpret what a traceroute tool’s results actually mean and RAS steps you through the various anomalies and pitfalls.

The complexity covered here is why asking job candidates about traceroute is a great way to expose their understanding, or ignorance, of the basics of packet-switched networks, IP and TCP/UDP/ICMP. You can also learn a lot about their approach to troubleshooting and analysing data, and whether they can turn it into useful information and communicate it.

The other presentation I was reminded of is Managing IP Networks with Free Software [400KB PDF] by Joe Abley and Stephen Stuart. It’s getting on a bit (NANOG 26 was in late 2002) but it’s still an interesting showcase of how you can get [nearly] instant results with some simple tools and a little scripting.

I’d like to say that things have changed in the decade since and we now have a one-size-fits-all tool that achieves a lot of the same goals, but because every organisation has different needs everyone keeps reinventing subtly different wheels. Hence I’ve been down the same path, installing RANCID and then building things around it. I just wish I’d known about textfsm, a Python module for parsing semi-structured text (e.g. ‘show run’ output) into tabular data.
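
As a small taste of what textfsm does, here is a hypothetical template and a made-up interface listing (neither taken from a real device), parsed into rows:

import io
import textfsm

# Hypothetical template: one Value per output column; the Start state records
# a row each time its pattern matches a line of device output.
TEMPLATE = r"""Value INTERFACE (\S+)
Value IP_ADDRESS (\S+)
Value STATUS (up|down)

Start
  ^${INTERFACE}\s+${IP_ADDRESS}\s+${STATUS} -> Record
"""

# Made-up "show ip interface brief"-style output.
RAW = """GigabitEthernet0/1  192.0.2.1     up
GigabitEthernet0/2  198.51.100.1  down
"""

fsm = textfsm.TextFSM(io.StringIO(TEMPLATE))
print(fsm.header)          # ['INTERFACE', 'IP_ADDRESS', 'STATUS']
print(fsm.ParseText(RAW))  # one list of values per matched line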

Do you have any favourite NANOG presentations?


Running RANCID on top of BZR and with multihop

Long time, no write. Been busy moving countries, as you do.

Started a new job too. Been setting up RANCID and wanted to pull together all the pieces here:

RANCID
Let’s start in the obvious place, www.shrubbery.net/rancid/. I prefer my own, slightly different, expansion of the acronym: Really Awesome Network ConfIg Differ. If you are not backing up or versioning the configuration of your networking equipment you really should take a look at it.

Patches for RANCID to use BZR
RANCID only offers CVS and SVN support out of the box. I’ve been using BZR for a while and strongly prefer it. Thankfully someone has provided patches to add BZR support.

Patches for RANCID to do multihop
One of the other things I needed to add was support for reaching a device via another device. I used the instructions from here and the updated patch from here. My .cloginrc entries look a little something like:

add user HOSTNAME {USER}
add password HOSTNAME {PASSWORD}
add autoenable HOSTNAME 1
add method HOSTNAME usercmd
add usercmd HOSTNAME {/usr/local/rancid/bin/clogin} {VIA_HOSTNAME}
add usercmd_chat HOSTNAME {#} {telnet IP_ADDRESS\r} {User Access Verification} {}

Note that the host we are going via is already defined, so we can reuse its clogin details to reach it (though I did need to provide the full path to clogin).
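
Before wiring the host into a RANCID group it is worth confirming that the chained login works from the command line; assuming the patched clogin honours the usercmd entries, running

/usr/local/rancid/bin/clogin HOSTNAME

should hop through the intermediate device and land at an enable prompt on the target.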

Loggerhead
To serve the BZR repo via a web interface I turned to Loggerhead. The only issue I have is hiding the RANCID log directory, because I am serving the whole of rancid/var as a “directory of branches”.


Series of Scalability Articles by Haytham El-fadeel

As the title says…

Art of scalability (1) – Scalability principles

Art of scalability (2) – Scalability guidelines part 1

Art of scalability (3) – Scalability guidelines part 2

Art of scalability (4) – Scalability guidelines part 3


Bernadette McMenamin applying the spin, again…

Australian IT is carrying a blog post by Bernadette McMenamin which is just full of misrepresentation and spin.

One of the most horrendous developments that we have experienced in the last 15 years is the dramatic explosion in the global trade of child sexual abuse images on the internet.

No one really knows the true quantities because it is mostly traded peer-to-peer and over encrypted networks, and none of those channels will be addressed by the proposed filter. Ref: http://libertus.net/censor/ispfiltering-au-govplan.html#s_stats

76 per cent would change to an ISP that blocked child pornography

There are ISPs that provide filtered access already, yet their market share is not overly large; so while the polls track the sentiment, it doesn’t appear to flow through to action. Markets respond to demand, and it’s clearly not there. Ref: http://libertus.net/censor/ispfiltering-au-govplan.html#s_10

Law enforcement and education are also key strategies and prominent in the Federal Government’s Safe internet Policy.

So why is the AFP budget for this going down and not up? Ref: http://libertus.net/censor/ispfiltering-au-govplan.html#s_38

Hundreds of millions of dollars is already being spent on law enforcement which is commendable but this only addresses the problem after the abuse has occurred.

ISP filtering has the same problem.

Critics of this new scheme have argued that ISP filtering of child sexual abuse images simply will not work. However these filters are actually working very effectively in Scandinavian countries and in the UK as well as in recent trials in New Zealand.

None of these examples is representative of what the ALP is proposing. Ref: http://libertus.net/censor/ispfiltering-au-govplan.html#s_6

Critics have also argued that ISP filtering will be costly and slow down the internet. Again based on overseas experience this is not the case.

The New Zealand trial is not equivalent to what the ALP is proposing. Nor does it refute the view that the filtering will slow access speeds.

My argument is that how can blocking illegal material (which should not be produced or stored in the first place) be censorship?

And many, if not all, would agree with you. But the ALP is proposing to block prohibited material, not all of which is illegal. Ref: http://libertus.net/censor/ispfiltering-au-govplan.html#s_21

Having said that I remain open minded as I hope the critics of the scheme will wait until the trials have been independently conducted to decide on whether Australia should take this leap into ISP filtering.

I think we’re all keen to see the results of the trials. Just have to wait till they are completed…

…any minute now…


Links of the Day: December 25, 2008

Gapingvoid: Guy with office job

Gapingvoid: Small shitty moment
