Hacker News | past | comments | ask | show | jobs | submit | ehynds's comments

Here in the northeast, electricity is expensive because we rely heavily on natural gas for power but lack sufficient pipeline capacity to bring in cheaper supply, all while nuclear plants are being retired, politicians have blocked new pipelines from Canada, and the Jones Act makes it costly to transport fuel by sea.

I'm sure AI isn't helping, but we have plenty of problems already.


And all those heat pumps and solar panels our governments subsidize are paid for by rolling the costs into our transmission/distribution/delivery fees.


About $300/mo on an app that provides real-time train tracking for Boston's commuter rail. Thankfully for me, the MBTA is a mess, so each system meltdown (which happens about once a week) causes a spike in downloads.


OP is saying that `let func = () => {};` creates a named function where func.name equals "func" and "func" appears as part of the stack trace.

Your example is still an anonymous function, just bound to a different context.
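A quick sketch of where name inference does and doesn't kick in (the `takesCallback` helper here is hypothetical, just for illustration):

```javascript
// Name inference applies when an anonymous function is assigned
// directly to a variable or property:
const named = () => {};
console.log(named.name); // "named"

// An arrow function passed inline gets no name:
const takesCallback = (cb) => cb.name; // hypothetical helper
console.log(takesCallback(() => {})); // ""

// .bind() doesn't name it either; per the spec it just prefixes
// "bound " to whatever name (possibly empty) the target already had:
const rebound = (() => {}).bind(null);
console.log(rebound.name); // "bound "
```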


And more verbose as well, which was the concern of the person to whom I replied.


That seems to do it.

  let func = () => {};
  console.log(func.name); // func
and in stack traces:

  let func = () => foo();
  func();
results in:

  Uncaught ReferenceError: foo is not defined
  func @ test.js:1


Wow. I had no idea that would work.

All this time I've been using named function expressions over arrow functions just for the stack traces.


Still though, in situations where I care about the stack trace, I don't see myself writing:

  let doStuffCallback = () => {};
  doStuff(doStuffCallback);
over:

  doStuff(function doStuffCallback() {
  });


I thought there should be a Broccoli plugin, and then I found this: https://github.com/sindresorhus/broccoli-uncss


I would also like to know that this isn't going to share the results to my profile.


We won't, and we'll make that promise explicit on the homepage. Thanks for your feedback.


and by disabling pinch zoom on mobile.


Absolutely. It takes the simplest-to-render website and makes it unusable for an entire class of devices, just by adding:

    <meta name="viewport"
          content="width=device-width,
                   initial-scale=1,
                   maximum-scale=1,
                   user-scalable=no">
(wrapped for your viewing pleasure)

Why would you ever do this on a page designed for readability?
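For comparison, the commonly recommended form sizes the layout for mobile without taking zoom away from the user (this is a generic sketch, not taken from the page under discussion):

```html
<!-- Sizes the layout viewport to the device width while leaving
     pinch zoom and user scaling enabled -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```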


And one-finger zoom on Chrome mobile. Any zoom, really.


Are they already open / matched by wildignore?


Digging through the source reveals a Konami easter egg. Unfortunately, all it really does is break the page.


You pointed out a few best practices, but it's a shame that this article is written in the context of jQuery rather than recognizing that these "taxes" apply to every script tag you place on your website.


That is absolutely true. I was looking at cnn.com: http://z.cdn.turner.com/cnn/tmpl_asset/static/intl_homepage/... , which sits in the header and chews up 100k of compressed JS. This monster must take upwards of 20ms to parse in Chrome as well, and it's way worse in crappy browsers. The CDN network you need to support this is huge.
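As a rough illustration of compile cost (the numbers are machine- and engine-dependent, and this is a sketch over generated source, not a benchmark of CNN's actual bundle):

```javascript
// Hedged sketch: approximate the cost of compiling a large script
// by timing `new Function()` over a few hundred KB of generated JS.
const statements = [];
for (let i = 0; i < 20000; i++) {
  statements.push(`var v${i} = ${i} * 2;`);
}
const source = statements.join('\n'); // roughly 300-400 KB of source

const start = process.hrtime.bigint();
new Function(source); // forces the engine to parse/compile the source
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;

console.log(`compiled ${source.length} bytes in ${elapsedMs.toFixed(1)} ms`);
```

Run under Node; engines may defer some inner parsing, so treat the result as a lower bound.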


In defense of sloppily composed news websites: they're not run like normal sites. Despite the obvious benefit of optimizing and cleaning up the site markup, it's quite normal, and even encouraged, to sacrifice optimization so that something just works.

Two main differences from normal sites are:

(1) The sheer number of moving parts: the number of hands that have a say in the site's daily operation.

Before anyone says "duh, well, get a handle on your site," you have to understand that the news world is chaotic for the right reasons: content flexibility and innovation. You can't put hundreds of ongoing creative and newsworthy projects on small development teams. That kind of open exploration belongs in the hands of editors who specialize in their particular beats and are willing to pursue newsworthy projects (vendor or agency partnerships, special reports, etc.)

These people understand only rudimentary HTML, but they have enough access to be dangerous.

That's a good thing.

More people doing this sort of thing is good for any news agency: it gives us a collective chance for any random project to be naturally selected by the audience for success. The problem is how to corral the various implementations, bandaids, and so forth to the point where it all works flawlessly.

However, cutting off editors from being able to write markup is not the solution - and post-optimization regression testing is impossible due to the sheer volume of content and creative projects. As long as something works within acceptable limits, it's fine.

(2) The politics of how technology intersects with editorial and operations.

There are invisible lines of power in news organizations that outsiders don't see. Bureaucracy alone is tough to navigate, but the compartmentalization of the editorial, technology, and operations teams contributes to the problem. Most news agencies have been transformed over the decades (in some cases, centuries) and are pretty set in their ways.

The explosion of internet technology has dramatically sped up this transformation process, and it'll take some time to iron out. The good news? The rise of devops in the news biz is happening.

Though I'm a fairly adept developer working for a news agency, I cannot update or fix our favicon file or optimize JS on the main page: for one, template control belongs to a different department that will prioritize its projects differently than I do. The communication overhead is too large for small matters. In their defense, their lean team is supporting 50 languages and 1,000 persons' worth of editors and producers at this point.

Secondly, even if something is simple to fix and done for the right reasons, navigating this field of artificial obstructions just to reduce page weight or load time doesn't justify taking time off from my latest editorial project: there is a fast-approaching relevance deadline on it.


Where are you getting these numbers from?


My laptop, which could be a bit faster. The cold cache is the major issue, though: distributing a file this big in a scalable way without blocking requires a lot of servers.

