Increase in Covid spam detected (Resolved) Low

Affecting System - Anti Spam Cluster

  • 22/03/2020 19:50 - 08/03/2021 00:57
  • Last Updated 14/09/2020 18:00

We have detected and are in the process of filtering large-scale spam attacks directed at our email cluster.

Please move any spam to the spam folder; our system will randomly sample messages from that folder to learn what you perceive as spam.

This will help our engineers stop the spread of spam directed at our email clients and third-party clients that rely on our neural-network anti-spam system.
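We cannot share the internal neural-network system, but the sampling-based learning described above can be sketched as a simple token-frequency scorer (a hypothetical illustration; function names and data are invented):

```python
import random
from collections import Counter

def sample_spam_tokens(spam_folder, k=2):
    """Randomly sample messages from the user's spam folder and
    count token frequencies (stand-in for real training data)."""
    sampled = random.sample(spam_folder, k=min(k, len(spam_folder)))
    counts = Counter()
    for message in sampled:
        counts.update(message.lower().split())
    return counts

def spam_score(message, spam_counts):
    """Score a message by the fraction of its tokens seen in sampled spam."""
    tokens = message.lower().split()
    hits = sum(1 for t in tokens if spam_counts[t] > 0)
    return hits / max(len(tokens), 1)

spam_folder = [
    "cheap covid masks buy now",
    "covid cure miracle buy now",
]
counts = sample_spam_tokens(spam_folder, k=2)
print(spam_score("buy covid cure now", counts))        # high score
print(spam_score("meeting agenda for tomorrow", counts))  # low score
```

A real filter would use far richer features, but the idea is the same: user-flagged spam becomes labeled training data.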

Web Server Failure (Resolved) Critical

Affecting Server - Adam - WEBCA

  • 11/03/2020 16:53 - 11/03/2020 18:00
  • Last Updated 26/03/2020 19:50

Our upstream provider performed emergency maintenance on the cloud head host. Details are below.

This maintenance affected the servers below. No data loss was expected; the drive journals were rebuilt. If you notice any issues, please contact support.


• Affected hosts / hôtes affectés :

• Affected instances / instances affectés :

The issue has been resolved. Some data loss may have occurred. If you are experiencing issues, please report them.

SSL Certificate on Load Balancer Issue (Resolved) Critical

Affecting Other - zenithmedia.ca

  • 18/12/2019 12:16 - 18/12/2019 13:56
  • Last Updated 18/12/2019 14:06

Our SSL Gateway Load Balancer for zenithmedia.ca had a stuck task when updating its certificate.

The issue has been resolved, but you may have noticed some downtime of the primary site and portal system.

DNS Cluster Out of Sync (Resolved) Critical

Affecting System - DNS

  • 10/10/2019 14:00 - 10/10/2019 14:50
  • Last Updated 10/10/2019 14:54

Reported internally

Our DNS cluster went out of sync.


Parsing of the configuration files for the DNS cluster failed on invalid characters.

The system could not parse the files properly and would double-enter entries, causing a loop.


  • Updated the template with proper syntax
  • Pushed the template to all servers
  • Re-synced zones
  • Restarted services


Everything is operational. The default system cache TTL is 900 seconds (15 minutes) and the operation took 12 minutes, so no service interruption is expected for end users.
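For reference, a 900-second cache corresponds to a BIND-style zone TTL like the fragment below (illustrative only; our actual zone files and record names are internal):

```
$TTL 900    ; default cache TTL: 900 seconds (15 minutes)
@   IN  SOA ns1.zenithmedia.net. hostmaster.zenithmedia.net. (
        2019101001  ; serial
        3600        ; refresh
        900         ; retry
        604800      ; expire
        900 )       ; negative-caching TTL
```

With a 900-second TTL, resolvers pick up the re-synced zones within 15 minutes at most, which is why a 12-minute operation caused no visible interruption.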

Activation of SSL Gateway lb-ca-sslgateway-zm (Resolved) Critical

Affecting System - www.zenithmedia.ca

  • 14/11/2018 06:00 - 14/11/2018 08:00
  • Last Updated 19/04/2019 11:06

We have enabled an SSL Load Balancing Gateway for our primary site, www.zenithmedia.ca; this also covers the main user portal at www.zenithmedia.ca/portal.

This should improve backend responsiveness and allow us to scale with demand.

If you're experiencing timeouts or slowdowns, please open a ticket or contact your account manager.

www.zenithmedia.net/uptime is the best place to view any possible interruption.

This ticket will remain open until we are fully satisfied with the balancer and no issues come up.

Technical Details:

These are the expected headers for each reply.

Obviously, Age, X-Cache, and X-Cache-Hits will depend on cache warming.

HTTP/1.1 308 Permanent Redirect
Content-length: 0
Location: https://www.zenithmedia.ca/
Connection: close

HTTP/1.1 200 OK
Date: Wed, 14 Nov 2018 23:55:26 GMT
Vary: Accept-Encoding
Public-Key-Pins: pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg="; pin-sha256="grX4Ta9HpZx6tSHkmCrvpApTQGo67CYDnvprLg5yRME="; max-age=31104000; includeSubDomains
Strict-Transport-Security: max-age=31104000; includeSubDomains; preload
Content-Type: text/html; charset=UTF-8
X-Varnish: 8853048 9177459
Age: 0
X-Cache: HIT
X-Cache-Hits: 1
Accept-Ranges: bytes
X-IPLB-Instance: 8892
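As a sanity check, the max-age value in the Strict-Transport-Security reply above works out to 360 days. A minimal parser (a sketch, not production code; the header string is copied from the reply):

```python
def parse_max_age(header_value):
    """Extract the max-age directive (in seconds) from a header value."""
    for directive in header_value.split(";"):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return int(directive.split("=", 1)[1])
    return None

hsts = "max-age=31104000; includeSubDomains; preload"
seconds = parse_max_age(hsts)
print(seconds, "seconds =", seconds // 86400, "days")  # 31104000 seconds = 360 days
```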

 -- Update April 19th 2019

Everything has been working flawlessly; closing ticket.

Redis & IGBINARY (Resolved) Low

Affecting System - Nubes

  • 15/03/2019 19:36 - 31/03/2019 00:00
  • Last Updated 19/04/2019 11:05


We are installing Redis, PHPRedis, and Igbinary as an in-memory cache and PHP optimizations for our Nubes platform.

This should improve in-memory caching and optimize the serialization of PHP requests, moving from the standard PHP serializer to igbinary.

Overview of Igbinary

Storing complex PHP data structures such as arrays of associative arrays with the standard PHP serializer is not very space efficient. The main reasons for this inefficiency are listed below, in order of significance (at least in our applications):

  • Array keys, property names, and class names are repeated redundantly.
  • Numerical values are stored as plain text.
  • Human readability adds some overhead.

Igbinary uses two strategies to minimize the size of the serialized output:

  • Repeated strings are stored only once (this also includes class and property names). Collections of objects benefit significantly from this. See the igbinary.compact_strings option.
  • Integer values are stored in the smallest primitive data type available: 123 = int8_t, 1234 = int16_t, 123456 = int32_t, and so on.
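The integer-packing strategy can be sketched in Python (an illustration mirroring the int8_t/int16_t/int32_t thresholds above; igbinary itself does this in C):

```python
def smallest_int_type(value):
    """Return the smallest signed primitive type that can hold `value`,
    mirroring igbinary's integer-packing strategy."""
    for bits, name in ((8, "int8_t"), (16, "int16_t"), (32, "int32_t"), (64, "int64_t")):
        if -(2 ** (bits - 1)) <= value < 2 ** (bits - 1):
            return name
    raise OverflowError("value too large for 64-bit storage")

print(smallest_int_type(123))     # int8_t
print(smallest_int_type(1234))    # int16_t
print(smallest_int_type(123456))  # int32_t
```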

Overview of Redis

Most website frameworks like Drupal and WordPress use the database to cache internal application "objects" which can be expensive to generate (menu trees, filter results, etc.), and to keep cached page content. Since the database also handles many queries for normal page requests, it is the most common bottleneck causing increased load times.

Redis provides an alternative caching backend, taking that work off the database, which is vital for scaling to a larger number of logged-in users. It also provides a number of other nice features for developers looking to use it to manage queues, or do custom caching of their own.
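The cache-aside pattern Redis enables can be sketched as follows (a plain dict stands in for the Redis client here; a real deployment would use phpredis or a similar client against the Redis server):

```python
cache = {}  # stand-in for Redis (e.g. a redis.Redis(...) client)
db_hits = 0

def expensive_query(key):
    """Simulate an expensive database query (e.g. building a menu tree)."""
    global db_hits
    db_hits += 1
    return f"rendered:{key}"

def get_cached(key):
    """Cache-aside: serve from cache, fall back to the database on a miss."""
    if key not in cache:
        cache[key] = expensive_query(key)  # populate on miss
    return cache[key]

print(get_cached("menu_tree"))  # miss: hits the database
print(get_cached("menu_tree"))  # hit: served from cache
print("database queries:", db_hits)  # only one real query
```

The second request never touches the database, which is exactly the load this maintenance aims to take off it.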


We will be testing the implementation and monitoring its progress and stability on the Nubes platform. These changes also affect and extend the alt-php73 package on the ADAM server.


-- Update

We removed igbinary due to conflicts. All other services are running normally.

Entropy Issue (Resolved) Critical

Affecting Server - Lilin - DNSUS

  • 13/03/2019 05:30 - 17/03/2019 03:29
  • Last Updated 19/04/2019 11:04

Low entropy on DNSUS/MXUS is causing the system to hang after an upgrade; the system should come back once the random subsystem has had a chance to rebuild its entropy pool.

This should be resolved shortly; all other DNS and MX systems are available and traffic should be routed to them automatically.

-- Update March 15th

Our upstream provider is investigating the issue; the server will remain down until further notice.

Please make sure you have all backup systems (DNS and Mail) in place to minimize any impact.

  • dnsca.zenithmedia.net
  • dnsus.zenithmedia.net
  • dnseu.zenithmedia.net
  • mxca.zenithmedia.net
  • mxus.zenithmedia.net
  • mxeu.zenithmedia.net

-- Updated March 17th 2019

We rebuilt the whole server; upstream had a hiccup and the drive was down.

Back in service.

cPanel Comodo WAF Rule 214540 (Resolved) Critical

Affecting System - Entire Cluster

  • 09/05/2018 03:00 - 31/07/2018 06:12
  • Last Updated 09/05/2018 14:40

We discovered an issue affecting users of Google Tag Manager; specifically, the following code.

> <!-- Google Tag Manager -->
> <noscript><iframe src="//www.googletagmanager.com/ns.html?id=GTM-XXXXXX"
> height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
> <script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
> new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
> j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
> '//www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
> })(window,document,'script','dataLayer','GTM-XXXXXX');</script>
> <!-- End Google Tag Manager -->

This would trigger Comodo WAF ModSecurity rule #214540:


SecRule RESPONSE_BODY "<[^a-zA-Z0-9_]{0,}iframe[^>]{1,}?\bstyle[^a-zA-Z0-9_]{0,}?=[^a-zA-Z0-9_]{0,}?[\x22']{0,1}[^a-zA-Z0-9_]{0,}?\bdisplay\b[^a-zA-Z0-9_]{0,}?:[^a-zA-Z0-9_]{0,}?\bnone\b" \

"id:214540,chain,msg:'COMODO WAF: Possibly malicious iframe tag in output||%{tx.domain}|%{tx.mode}|3',phase:4,capture,block,setvar:'tx.outgoing_points=+%{tx.points_limit3}',setvar:'tx.points=+%{tx.points_limit3}',logdata:'Matched Data: %{TX.0} found within %{MATCHED_VAR_NAME}: %{MATCHED_VAR}',ctl:auditLogParts=+E,t:replaceComments,rev:5,severity:3,tag:'CWAF',tag:'FilterInFrame'"

SecRule &REQUEST_COOKIES:sugar_user_theme "@eq 0" \


SecRule TX:0 "!@rx \ssrc=\x22https:\/\/www\.googletagmanager\.com\/ns\.html\?id=GTM|\ssrc=\x22https:\/\/w\.soundcloud\.com\/player\/\?url=" \


This would in turn cause the following to show in the error logs.

> [Wed May 09 10:19:29.618567 2018] [:error] [pid 536577:tid 139855023412992] [client 173.X.X.X:50282] [client 173.X.X.X] ModSecurity: Access denied with code 403 (phase 4). Match of "rx \\\\ssrc=\\\\x22https:\\\\/\\\\/www\\\\.googletagmanager\\\\.com\\\\/ns\\\\.html\\\\?id=GTM|\\\\ssrc=\\\\x22https:\\\\/\\\\/w\\\\.soundcloud\\\\.com\\\\/player\\\\/\\\\?url=" against "TX:0" required. [file "/etc/apache2/conf.d/modsec_vendor_configs/comodo_apache/21_Outgoing_FilterInFrame.conf"] [line "14"] [id "214540"] [rev "5"] [msg "COMODO WAF: Possibly malicious iframe tag in output||www.XX.com|F|3"] [data "Matched Data: <iframe src=\\x22//www.googletagmanager.com/ns.html?id=GTM-XXXXXX\\x22\\x0aheight=\\x220\\x22 width=\\x220\\x22 style=\\x22display:none found within TX:0: <iframe src=\\x22//www.googletagmanager.com/ns.html?id=GTM-XXXXXX\\x22\\x0aheight=\\x220\\x22 width=\\x220\\x22 style=\\x22display:none"] [severity "ERROR"] [tag "CWAF"] [tag "FilterInFrame"] [hostname "www.XX.com"] [uri "/en/fr/experts/clinic/403.shtml/"] [unique_id "WvMDcZoDJW7ZSWGe@Ei9hQAAAIE"], referer: http://www.google.ca/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&ved=0ahUKEwjg-qGO6fjaAhWOl-AKHXaIBcMQFghwMAI&url=http%3A%2F%2Fwww.XX.com%2Ffr%2Fexperts%2Fclinic%2Fst-bruno%2F&usg=AOvVaw2pNPWXqcIe4bE6SHIn9AlG

> <!-- Google Tag Manager (noscript) -->
> <noscript><iframe src="https://www.googletagmanager.com/ns.html?id=GTM-XXXX"
> height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
> <!-- End Google Tag Manager (noscript) -->

As you can see, they specify https:// in the updated snippet.
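Condensed versions of the two patterns show why the protocol-relative snippet is blocked while the https:// one passes (regexes simplified from rule 214540 above; illustrative only):

```python
import re

# Condensed from rule 214540: flag hidden iframes in the response body.
iframe_hidden = re.compile(
    r"<\W*iframe[^>]+?\bstyle\W*?=\W*?[\"']?\W*?\bdisplay\b\W*?:\W*?\bnone\b")

# Condensed whitelist: only an explicit https:// GTM src is exempted.
whitelist = re.compile(
    r'\ssrc="https://www\.googletagmanager\.com/ns\.html\?id=GTM')

old_snippet = ('<iframe src="//www.googletagmanager.com/ns.html?id=GTM-XXXXXX"\n'
               'height="0" width="0" style="display:none;visibility:hidden">')
new_snippet = ('<iframe src="https://www.googletagmanager.com/ns.html?id=GTM-XXXX"\n'
               'height="0" width="0" style="display:none;visibility:hidden">')

for snippet in (old_snippet, new_snippet):
    flagged = iframe_hidden.search(snippet)
    exempt = whitelist.search(snippet)
    print("blocked" if flagged and not exempt else "allowed")
```

Both snippets match the hidden-iframe pattern, but only the https:// variant matches the whitelist, so the protocol-relative `//` form triggers the 403.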

We will escalate this case to Comodo and Google and continue to monitor the situation internally.

Cluster in Europe (SBG) is down (Resolved) Critical

Affecting Server - Cabot - DNSEU

  • 09/11/2017 03:39 - 09/11/2017 06:18
  • Last Updated 13/11/2017 08:20

We have had reports since 1:34 AM EST that our cluster in Europe (SBG 1) is not responding.
It seems a power outage occurred, and a fault in the generator is causing the whole datacenter to be offline.

Engineers are investigating the issue, and it should be resolved shortly.
It seems the routers are on the same generator, causing a full network drop; nothing is reachable remotely.

Here is a link you can follow from our datacenter provider, for updates on this issue.


This does not affect any users at this time; redundancy on our network is in place.
To view our internal uptime and stats please visit:



It looks like a critical system failure occurred in the 44 x 100G fiber-optic bays; the configuration and backup configurations were lost, and all router links at all 6 POPs were down.
A loss of 4.4 Tbps of bandwidth and 100% of the fiber links on that network. Our provider is in the process of upgrading their offering with more redundancy, which should resolve this problem.


Issue resolved. Downtime of 1 hour, 38 minutes.

*Updated on November 13th 2017

Maintenance of Failover (Resolved) Critical

Affecting Server - MAGI - WEBCA

  • 06/09/2015 05:15 - 06/09/2015 07:00
  • Last Updated 17/10/2017 23:47

We performed a failover of our web network this morning and encountered failures from Pingdom. The issue we discovered is that even when we fail over correctly, Pingdom tests via IP and not hostname.

www.zenithmedia.ca/uptime will show a downtime of 3:45 for September 6th 2015.

We will re-configure Pingdom with the changes, and our uptime stats should be cleaned up to reflect our commitment to 100% uptime.

Issue with GEO IP (Resolved) High

Affecting Other - Primary Site www.zenithmedia.ca

  • 03/06/2015 08:37 - 05/12/2015 09:00
  • Last Updated 17/10/2017 23:45

We have had issues with our primary site loading extremely slowly.
It seems to be an issue with our GeoIP provider, Telize; we have switched to FreeGeoIP and the load has been removed.
The issue seems to be with IPv6 resolution; we will investigate and see what can be resolved.

The GeoIP DB is used to generate the company phone numbers you see at the top/bottom of every page. We use this technique in hopes of providing the best contact number based on the location of our clients: Montreal, Toronto, or toll-free numbers are displayed depending on location.
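The selection logic amounts to a small region-to-line lookup (a hypothetical sketch with placeholder labels; the real numbers and region mapping come from our site configuration):

```python
def contact_number(region_code):
    """Pick the displayed phone line from a GeoIP region lookup
    (placeholder labels; real numbers live in the site config)."""
    lines = {"QC": "MONTREAL_LINE", "ON": "TORONTO_LINE"}
    return lines.get(region_code, "TOLL_FREE_LINE")

print(contact_number("QC"))  # MONTREAL_LINE
print(contact_number("BC"))  # TOLL_FREE_LINE (fallback)
```

The slowdown came from the GeoIP lookup itself, not this mapping: a slow upstream lookup blocks page rendering before the fallback is ever reached.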

We will possibly remove this feature due to the loading issues we have had in the past.

This feature was removed from www.zenithmedia.ca/portal/ due to above stability issues.

Intermittent Phone Issues (Resolved) Critical

Affecting System - voip

  • 24/01/2017 17:13 - 06/02/2017 05:26
  • Last Updated 17/10/2017 23:43

We have been having issues with our telephone system. You can always open a support ticket in your customer portal.


We are investigating and hope to have it resolved soon.

Outgoing calls are working; it seems to be an AUDIO issue. You can still call your account manager's extension directly,
or press 1 for sales and 2 for support.

You will not receive any indication until someone picks up the line.

Sorry for the inconvenience.

- Jon Wong

--- Updated 02/06/2017

The audio issue has been resolved and the full IVR is online. Sorry for the problem.

The issue was with our upstream provider and was related to a codec upgrade that scrambled our IVR audio, voiced by our lovely Kimmy Pops.

If you have issues, please open a ticket.

Billing System Issues (Resolved) Critical

Affecting System - portal

  • 06/11/2016 14:29 - 08/11/2016 15:13
  • Last Updated 01/12/2016 10:02

We are experiencing issues with our billing system; the issue only affects the generation of emails.

Some clients might not receive their invoice/quotes or account creation emails. We will do our best to manually send out the required information. We hope to resolve this issue as soon as possible.

Thank you.

- Jon Wong
Founder of Zenith Media

-- updated November 8th 2016

We have resolved the issue; it was due to the code on our automatic login forms. We have adjusted our system, and the change in code is confirmed working.

As part of our effort to give back to the community, the cause can be viewed on our GitHub page: www.zenithmedia.ca/github

anycast-dzone2 (Resolved) Low

Affecting System - anycast02.zenithmedia.net

  • 05/11/2016 10:17 - 05/11/2016 00:00
  • Last Updated 05/11/2016 10:22

We have received reports that our status page located at https://www.zenithmedia.ca/uptime was showing anycast-dzone2 as down. We can confirm that this status is a false positive: we did not update the status page with the updated IP address.

anycast02.zenithmedia.net is the official hostname for this DNS cluster. This particular cluster is part of our partnership with CIRA, the registrar responsible for .CA domain names.

No domain, hosting, email, or DNS services were affected by this error.

If for some reason you think this affected you, please open a ticket and we will investigate further.

Downtime BHS (Resolved) Critical

Affecting Server - Caspar - WEBCA

  • 04/03/2016 21:51 - 05/03/2016 09:28
  • Last Updated 30/03/2016 10:13

A power outage was experienced at the BHS Data center taking down over 7,000 instances of the Public Cloud infrastructure.

A power bay had a short circuit and fried the whole circuit. Electrical teams were dispatched, and ~2,500 servers are back online.

All our client servers are back online as of 9:30 am.

Affected services were websites only. Mail was received and queued on our European servers and is being delivered to our clients as we write this.

We are in the process of investigating load balancing and replication of our website infrastructure in hopes of providing reliable uptime to our clients.
No data loss is expected and all systems are up.

Please report any problems. Due to the nature of the problem and the active electrical work being done, it is possible that the servers will flap up and down a little.

Here is the official tweet from the datacenter CEO: https://twitter.com/olesovhcom/status/706050081997332480

CVE-2015-7547 (Resolved) Critical

Affecting Server - Caspar - WEBCA

  • 23/02/2016 04:52 - 23/02/2016 04:53
  • Last Updated 23/02/2016 04:53

CVE-2015-7547 is a critical vulnerability in glibc affecting any versions greater than 2.9. The DNS client side resolver function getaddrinfo() used in the glibc library is vulnerable to a stack-based buffer overflow attack. This can be exploited in a variety of scenarios, including man-in-the-middle attacks, maliciously crafted domain names, and malicious DNS servers.
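Exposure can be reasoned about by comparing the installed glibc version against 2.9 (a sketch; on a real server you would check `ldd --version` output and your distribution's advisory for the patched package):

```python
def parse_version(text):
    """Turn a version string like '2.17' into a comparable tuple (2, 17)."""
    return tuple(int(part) for part in text.split("."))

def vulnerable_to_cve_2015_7547(glibc_version):
    """glibc releases after 2.9 ship the affected getaddrinfo();
    the fix arrived via patched distribution packages."""
    return parse_version(glibc_version) > parse_version("2.9")

print(vulnerable_to_cve_2015_7547("2.17"))  # True (needs the patched package)
print(vulnerable_to_cve_2015_7547("2.8"))   # False
```

Note that tuple comparison handles "2.17" vs "2.9" correctly, where naive string comparison would not.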

All servers have been updated to resolve this issue.

Mail server migration issue (Resolved) Critical

Affecting Server - Caspar - WEBCA

  • 10/01/2016 11:00 - 10/01/2016 12:20
  • Last Updated 10/01/2016 12:24

We have noticed an issue during our last client migration: we found a bug in the migration script that was not setting email domains as local.
We have resolved the issue and all email is flowing normally. If you have any email issue related to this bug, please open a ticket (replies via Facebook comments are not always monitored).

If you received the following errors:

Your message cannot be delivered to the following recipients:

 Recipient address: you@yourdomain.com
 Reason: Remote SMTP server has rejected address
 Diagnostic code: smtp;550-Please turn on SMTP Authentication in your mail client. remote.email.server.tld [XXX.XXX.XXX.XXX]:43941 is not permitted to relay through this server without authentication.

 Remote system: dns;caspar.zenithmedia.net (TCP|XXXXXXXX|43941||25) (caspar.zenithmedia.net ESMTP Exim 4.86 #2 Sun, 10 Jan 2016 11:38:09 -0500 )

The cause was resolved, please re-send any emails that could have failed due to this bug.


Fiber cut at main Data Center (Resolved) Critical

Affecting Server - MAGI - WEBCA

  • 02/11/2015 11:30 - 02/11/2015 16:30
  • Last Updated 05/12/2015 09:00

Our primary data center in Montreal, Quebec had a fiber-optic cut in a tunnel 108 km away and experienced 5 hours and 30 minutes of interruption.
All services are currently restored and everything is back online.

Unfortunately this interruption was out of our control, and due to its nature our infrastructure was affected; we noticed issues with our backup mail servers and resolved them.

We are currently investigating other backup options to mitigate these kinds of issues in the future.

If you have any questions regarding this issue, please contact us using our Portal.

-Jon Wong
Founder of Zenith Media Canada

Change of HOSTNAME for magi.zenithmedia.me (Resolved) Critical

Affecting Server - MAGI - WEBCA

  • 15/09/2015 02:56 - 15/09/2015 04:00
  • Last Updated 18/10/2015 06:38

We are in the process of changing the hostname of one of our web clusters. This change is backward compatible, but we request that all clients update their settings.

The hostname in question is magi.zenithmedia.me; it is being updated to magi.zenithmedia.net to reflect our network domain. All other servers have been on the .net domain from the start, so no major changes should be expected.

We are in the process of updating MX/TXT/SRV records for ALL client domains and all parent domain records that we manage. Unfortunately we do not manage every client's domains and cannot force changes to domains we don't manage, hence this notice.

We will do our best to update all our clients who are affected by this change.


PHP 5.4.43 Patch (Resolved) Medium

Affecting Server - MAGI - WEBCA

  • 15/07/2015 00:22 - 15/07/2015 01:14
  • Last Updated 15/07/2015 02:23

We are in the process of updating all web servers to PHP 5.4.43 to address security issues reported by NIST.

The National Vulnerability Database (NIST) has given the following severity ratings to these CVEs:

CVE-2015-3152 – MEDIUM

PHP 5.4.43
Fixed bug in mysqlnd library related to CVE-2015-3152



Primary site & portal ip routing issue (Resolved) Critical

Affecting Other - Primary Site www.zenithmedia.ca

  • 09/07/2015 03:16 - 15/07/2015 00:00
  • Last Updated 15/07/2015 00:50

We started investigating reports that our primary website and portal system were offline. We contacted our upstream provider and identified a routing issue following an upgrade to our cloud infrastructure.
This issue did not affect our clients or secondary services (anycast, ERP, mail), but it did take our primary website and client portal system offline. We reported the issue and it was resolved. If you are reading this, then everything is working!

All orders were suspended during this time. If you tried to place an order, please try again.

If you experience any issues since this has been resolved, please open a support ticket.

As of 15/07/2015 the issue has been resolved and confirmed stable.

HeartBleed Bug (Resolved) Critical

Affecting Server - MAGI - WEBCA

  • 11/04/2014 13:00 - 11/04/2014 15:00
  • Last Updated 30/04/2015 18:01

We have finalized our investigation into the "Heartbleed" SSL bug, and our systems are no longer affected.

Based on our investigation no system was compromised.

Here are the results from our tests.


For more information on the SSL bug you can visit http://heartbleed.com/

DNS Change & Account Move. (Resolved) Low

Affecting Server - MAGI - WEBCA

  • 02/04/2013 08:58 - 02/04/2013 08:58
  • Last Updated 12/02/2015 13:10

Changed the DNS settings on 10 legacy accounts and moved them to our new servers.

No downtime.

Apache 2.4.10 Update (Resolved) High

Affecting Server - MAGI - WEBCA

  • 10/01/2015 00:00 - 10/01/2015 01:00
  • Last Updated 16/01/2015 18:21

We have updated Apache to version 2.4.10 as a system performance test. We have removed mod_pagespeed during the tests.
If you rely on mod_pagespeed for advanced filtering, please contact us for alternate solutions.

The Apache performance improvements are geared towards HTTPS security.

-- Update Jan 16th 2015

We have re-enabled mod_pagespeed after initial testing. It will be activated by default if you had it previously enabled.

You can enable advanced caching by adding this to your .htaccess:

<IfModule pagespeed_module>
ModPagespeed on
</IfModule>

For advanced configuration and reference please contact your support representative.

Issue with .htaccess and Apache 2.4 (Resolved) Critical

Affecting Server - MAGI - WEBCA

  • 15/01/2015 18:08 - 16/01/2015 00:00
  • Last Updated 16/01/2015 18:18

We have received reports of users seeing a 500 error code when accessing their sites.
The issue is with an option found in .htaccess: SetEnv TZ Etc/GMT-5
It is not compatible with Apache 2.4 and should be commented out. This will resolve the 500 error you would otherwise see displayed.
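For reference, commenting the directive out looks like this in the affected .htaccess (illustrative fragment):

```apache
# Commented out for Apache 2.4 compatibility:
# SetEnv TZ Etc/GMT-5
```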

If you have any issues or are registered as a managed account, please contact support. We have taken the initiative and resolved the issue for all our clients.

DKIM Issue (Resolved) Critical

Affecting Server - MAGI - WEBCA

  • 09/01/2015 15:25 - 12/01/2015 00:00
  • Last Updated 12/01/2015 15:28

We received reports of an error with our DKIM validation: our primary mail server was rejecting valid DKIM signatures.
We traced the bug to an updated ACL rule; the issue has been resolved and mail flow is continuing as normal.

If you continue to receive errors please open up a support ticket.

This issue did not affect our backup mail servers.

Network Card Replacement - FS#7863 (Resolved) High

Affecting Other - Network

  • 05/10/2014 14:47 - 07/10/2014 14:11
  • Last Updated 08/10/2014 19:06

We have detected packet loss from our upstream provider. We have notified them of the issue and a fix is in place.
Network packet loss was detected at 05/10/2014 02:47:37PM.

Quote from our provider:
"A network component is currently malfunctioning, our team is currently in the process of resolving the issue."

No ETA as of present; the quick fix was to disable our edge firewall.

Date and time of analysis: 05/10/2014 03:03:50PM
  • Packet loss
  • 05/10/2014 03:03:19PM
  • Los Angeles, CA
  • traceroute to (, 30 hops max, 60 byte packets
  • 1 ( 38.268 ms 38.506 ms 38.606 ms
  • 2 colo-lax9.as29761.net ( 1.229 ms 1.510 ms 1.613 ms
  • 3 * * *
  • 4 * * *
  • 5 bhs-g2-6k.qc.ca ( 69.163 ms 69.413 ms 69.519 ms
  • 6 * * *
  • 7 ( 96.413 ms 95.940 ms 96.386 ms
  • 8 vac3-0-a9.qc.ca.vaccum ( 878.145 ms 875.964 ms 876.243 ms
  • 9 ( 105.424 ms 105.612 ms 105.324 ms
  • 10 vac3-0-a9.qc.ca.vaccum ( 887.881 ms * *
  • 11 ( 115.666 ms * 114.575 ms
  • 12 * * *
  • 13 * * *
  • 14 * * *
  • 15 * * *
  • 16 * * *
  • 17 * * *
  • 18 * * *
  • 19 ( 162.096 ms 162.341 ms *
  • 20 * * *
  • 21 * * *
  • 22 * * *
  • 23 * * *
  • 24 * * *
  • 25 * * *
  • 26 * * *
  • 27 * * *
  • 28 * * *
  • 29 * * *
  • 30 * * *

Date and time of analysis: 05/10/2014 03:03:50PM
  • Packet loss
  • 05/10/2014 03:03:19PM
  • Las Vegas 4, NV

  • traceroute to (, 30 hops max, 60 byte packets
  • 1 72-46-153.static.versaweb.net ( 0.194 ms 0.647 ms 1.317 ms
  • 2 ( 0.971 ms 1.624 ms 1.893 ms
  • 3 any2ix.coresite.com ( 7.733 ms * *
  • 4 * * *
  • 5 bhs-g2-6k.qc.ca ( 79.350 ms 79.942 ms 80.188 ms
  • 6 * * *
  • 7 ( 81.506 ms 81.840 ms 82.075 ms
  • 8 * vac3-0-a9.qc.ca.vaccum ( 840.338 ms *
  • 9 ( 86.970 ms * 86.049 ms
  • 10 * vac3-0-a9.qc.ca.vaccum ( 841.078 ms *
  • 11 ( 89.305 ms 90.774 ms 89.258 ms
  • 12 * * vac3-0-a9.qc.ca.vaccum ( 859.539 ms
  • 13 * ( 96.724 ms 97.212 ms
  • 14 * * *
  • 15 * * ( 99.446 ms
  • 16 * * *
  • 17 ( 102.100 ms 102.505 ms 102.832 ms
  • 18 * * *
  • 19 * * ( 105.513 ms
  • 20 * * *
  • 21 ( 111.665 ms 113.031 ms 113.365 ms
  • 22 * * *
  • 23 * * *
  • 24 * * *
  • 25 * * *
  • 26 * * *
  • 27 * * *
  • 28 * * *
  • 29 * * *
  • 30 * * *
Replacement installed.