This has been reposted so many times by the author and by others that I can't help but finally ask.
What's the point? This would lead to your API being comprised of blocks of HTML which are probably only useable for one product. Why not just use REST + JSON? It would take no more than five minutes to set up client-side rendering, and you could even make it attribute-based like this with barely any more effort. Is it really not worth spending the extra five minutes it takes to set things up in a way that is reusable and standard? All I see is piles of legacy code being generated where it hurts most - in the backend.
This took me 10 minutes to cook up. It would have taken about three if I hadn't forgotten the jQuery and Handlebars APIs. This allows you to POST to a JSON API using two attributes. Untested of course, but you get the idea:
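Something along these lines; the attribute names are invented, and plain fetch stands in for the jQuery call so the sketch is self-contained (the Handlebars rendering only runs in a browser):

```javascript
// Turn a declarative payload into fetch() options for a JSON POST.
function buildPostOptions(payload) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload || {})
  };
}

// Browser-only wiring, skipped when there is no DOM (e.g. under Node):
// any element carrying a (made-up) data-post-url attribute POSTs its
// data-payload on click and renders the JSON response through the
// Handlebars template named by data-template.
if (typeof document !== "undefined") {
  document.addEventListener("click", function (e) {
    var el = e.target.closest("[data-post-url]");
    if (!el) return;
    fetch(el.dataset.postUrl, buildPostOptions(JSON.parse(el.dataset.payload || "{}")))
      .then(function (res) { return res.json(); })
      .then(function (json) {
        var src = document.querySelector(el.dataset.template).innerHTML;
        el.innerHTML = Handlebars.compile(src)(json);
      });
  });
}
```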
> This would lead to your API being comprised of blocks of HTML which are probably only useable for one product.
Because for probably the majority of developers there IS only one product.
Here's what I think is happening. A bunch of people are working on websites that need to scale across multiple users, adapt to multiple clients, and grow and pivot as business needs change. And they've learnt the hard way about building scalable systems.
But what is going horribly wrong is that their war stories and best practices and tools and processes are being used by a bunch of us who will never face those problems. And we're paying the price in terms of complexity for problems we aren't ever going to face.
If you're building a SaaS for a startup that might reasonably expect exponential growth, unfortunately your advice is being taken to heart by people building blogs and small web-shops and we're seeing some terrible technical decisions being made because everyone wants to do things 'right'.
We need a healthy dose of YAGNI drilled into people. Whatever happened to progressive enhancement? It still suits the vast majority of web projects perfectly well.
I'd also say that JSON APIs either tend to get tuned for specific UX needs or become more and more expressive. The first option calls into question the general, reusable nature of the API, and the second introduces security issues in an untrusted computing environment[1].
I typically split out the JSON API for my system from the web application proper so that my web application needs don't screw up the public API of the system. They end up being two separate concerns with different shapes, authentication methods, etc.
Customers aren't going to use GraphQL for a public API, so you've just added another (micro) service to talk to the GraphQL server and translate it into a more traditional REST API (or the other way around if you want your internals to be REST). Also, you've pretty much forced an entire ecosystem of complexity onto your front end.
And then you've added a ton of complexity, bringing us full circle.
Based on the parent comments, we're talking about simple websites where serving HTML through AJAX is an effective approach. In the name of "purity", you've introduced JSON APIs, GraphQL, (most likely) React and Relay, more complex deployment with multiple services, probably a complex front-end buildchain, and who knows what else.
Right now this isn't feasible. People are used to REST APIs, and the tooling and documentation (particularly outside of use with React and Relay) for GraphQL isn't where it needs to be for it to be feasible for a customer-facing API.
I started out agreeing with your sentiment, then found myself disagreeing with the premise behind it.
Yes, a lot of companies are founded around a single product or family of products. They can afford to use a building block that isn't re-usable across different projects. They're not making things for public consumption, and they don't have multiple very divergent codebases.
But these aren't small web shops and people building blogs. In the DC area at least, the above describes every midsized company (midsized as in > $1 million and < $50 million monthly revenue). These guys are the makers of the software that runs in doctors' offices, hotels, non-profits, political organizations, and government contractors.
They're already bigger than most SaaS companies in SV can ever expect to be.
I oversimplified to make a point. If I could rephrase it in more general terms it would be something like "people in web development are taking engineering advice from people who are solving completely different problems" or maybe "best practice in web development is being framed by the atypical"
The short answer is: because the web architecture was and is fundamentally different from the '80s-style client/server model you are advocating, and it has specific advantages that are being lost in the transition back to that model (security, scalability, simplicity, etc.).
In the first article you raised two issues with REST: developers disagree on what it means to be "REST-ful", and HATEOAS never took off, despite being the feature that distinguishes REST from other API styles.
In the second article you explained that HTML could implement HATEOAS.
In the third article you argue that GraphQL is the natural progression of REST but its security model is complex to the point of being unsafe.
---
The first article, in my opinion, consisted of straw-man arguments.
The second is understandable.
The third made no sense to me. With a system like GraphQL you can use a declarative column-based security model. This is, in my opinion, easier than the imperative stuff you're probably using to make your HTML endpoints secure. With GraphQL you need to set up your security constraints once. With HTML endpoints you need to remember to toggle off certain blocks of HTML for every single request. Is that what you're doing?
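To sketch what I mean by a declarative per-field ("column") rule; all names here (rules, wrapAuth, resolvers) are invented for illustration, and a real GraphQL server would hang this off its actual resolver map:

```javascript
// One rule per field, declared once, instead of toggling HTML per request.
var rules = {
  name: function () { return true; },                        // public field
  email: function (user) { return !!user && user.role === "admin"; }
};

// Wrap a resolver so the field's rule runs before the value is returned.
function wrapAuth(field, resolve) {
  return function (obj, args, ctx) {
    if (!rules[field](ctx.user)) {
      throw new Error("Not authorized to read " + field);
    }
    return resolve(obj, args, ctx);
  };
}

var resolvers = {
  name: wrapAuth("name", function (obj) { return obj.name; }),
  email: wrapAuth("email", function (obj) { return obj.email; })
};
```

The point is that the constraint lives next to the schema, once, rather than in every endpoint that happens to emit the field.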
What were the strawmen in the first article? It is observable that REST/HATEOAS (HATEOAS in particular) are falling out of favor in JSON-based APIs. This is understandable because JSON is not a hypertext, and the rest of REST isn't amazingly useful without it. Where's the strawman?
I'm glad the second article makes sense.
The core point of the third article is that when you increase the expressive power of a JSON API (with something like GraphQL) you are putting this power in the hands of the end user, not just your developers. This is not the same as giving, for example, full SQL access to your server-side-only developers, where the code is executing in a trusted computing environment.
> With GraphQL you need to set up your security constraints once
I'm only superficially familiar with GraphQL (and not at all familiar with Intercooler), but I always felt that security was glossed over and not a core part of what it offers.[0]
Authorization is challenging enough on the server, but having a query-language-powered client side feels like a pretty fragile thing to secure properly. Definitely not something you just set once and forget about...
[0] http://graphql.org/learn/authorization/ - if I get it right, it gives a good example of row-based authorization and essentially tells you to figure it out for yourself in your business logic layer.
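For reference, the linked page's approach boils down to something like this row-based check, delegated to the business-logic layer rather than to GraphQL itself (property names here are illustrative):

```javascript
// Row-based rule: only the author of a post may read its body.
// GraphQL calls the resolver; the authorization decision lives in
// ordinary application code.
function postBodyResolver(post, args, context) {
  if (context.user && post.authorId === context.user.id) {
    return post.body;
  }
  return null; // hide the row from everyone else
}
```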
> This has been reposted so many times by the author and by others
The author did post a story about intercooler four times in the last 3 years. Where is the problem?
> This would lead to your API being comprised of blocks of HTML which are probably only useable for one product.
What's the problem with that if there is no need to reuse the API, or if there is no API at all, just a bunch of uncool PHP scripts?
> It would take no more than five minutes to set up client-side rendering?
Is client-side rendering better per se?
> Is it really not worth spending the extra five minutes it takes to set things up in a way that is reusable and standard?
What standard do you refer to?
I think it is obvious that this lib isn't meant to be the foundation for the next Facebook app, a contender to the current hip bloatware frameworks, or the best lib for single-page apps generally. But I can see lots of use cases where this lib will help to spice up some projects without getting buried under a boatload of unnecessary tooling and boilerplate code.
> The author did post a story about intercooler four times in the last 3 years. Where is the problem?
Look at the author's comment history, rather than story history. He has a reputation of plugging intercooler in every discussion related to JS. Not that I mind; but most of the "reposts" that GP is talking about are comments, not actual stories.
> What's the point? This would lead to your API being comprised of blocks of HTML which are probably only useable for one product.
Only usable for one product? There aren't many platforms with UI layers that lack some kind of component for rendering/presenting HTML these days. But even for those that don't:
> Why not just use REST + JSON?
Markup is as much a data-exchange format as JSON is. If you're writing/generating it well, anyway.
I'm not saying never use JSON (I have and do). I am saying it's a little weird, though, that we've somehow got to the point where anyone commenting on this discussion has forgotten that markup has done, and can do, the job JSON does, or that we're at a place where, as an industry, it seems weird or a potential interop problem to serve markup instead.
Plus, unless I'm missing something, the typical case is just:
if accept==html then return html(template, data)
else return json(data)
Which is sure as hell less work than setting up React/Angular and a bunch of other junk on your HTML frontend to consume JSON and turn it into HTML in the browser. Probably higher performance, too, since JSON-consuming HTML frontends in the wild don't seem to exhibit (putting it mildly) the performance improvements that AJAX originally promised—then again, that X was XML (and remember XHTML? I, for one, really liked it) so it's not necessarily AJAX's fault that we've decided to rube-goldberg up the web the way we have.
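A runnable version of that branch, framework-free; the template function stands in for whatever renderer you actually use:

```javascript
// Serve HTML to clients that accept it, JSON to everything else,
// from the same data.
function respond(acceptHeader, template, data) {
  if ((acceptHeader || "").indexOf("text/html") !== -1) {
    return { contentType: "text/html", body: template(data) };
  }
  return { contentType: "application/json", body: JSON.stringify(data) };
}
```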
Not everything needs a JSON API so that any client can talk to it. You're also underestimating the work of setting up client-side rendering; my experience tells me otherwise.
Most of the time a Django/Rails-rendered backend is desirable and easier to maintain over the long run.
> Is it really not worth spending the extra five minutes it takes to set things up in a way that is reusable and standard?
HTML is standard. Even more importantly it has rich and standard semantics. Your homebrew protocol based on JSON is not and does not. You write an interpreter for it every single time without even realizing it.
The thing you cooked up in 10 minutes does not follow the principles of progressive enhancement. This immediately closes up lots of doors.
> All I see is piles of legacy code being generated where it hurts most - in the backend.
No problem. It's just views. If the need comes to make it general, and output JSON from everywhere, you just write another set of views. They are discardable. (And you'll probably need different URLs for the JSON anyway, so you'll probably keep both sets.)
So, where I work we're heavily bought into the idea of rendering client-side content in front-end services (what you call the backend). It lets devs change user-facing behaviour without invalidating cached minified assets (which saves money at scale). Plus, we can test client-side behaviour in just a few browsers and be reasonably confident that everything else will be roughly the same.
It works for us because the site is extremely static…
For a lot of the client-side apps I create, I want the response to trigger potentially multiple listeners that are used in a composed fashion. It's a lot less efficient to write 1:1 logic for each endpoint.
I was thinking the same thing. The only reason I can think of to use Intercooler is for very simple scenarios, or for people who prefer server-side programming and want to minimise JS programming. Which I can relate to, to some extent.
Having said that, I've been doing a side project in KnockoutJS and it isn't much harder than IC. You mark up the HTML and need a little bit of JS to call the API. So it would be like your example but with a line or two removed. And, like your example, as soon as you want to do something custom you just add some more code.
On the server side you emit JSON instead of HTML, and this can be easier IMO: virtually every language has a JSON library and you just convert your ORM objects to JSON (or use them as-is!). That's less work than rendering HTML, I would have thought.
You nailed it. I don't want to do JS. I know enough Python to get by, I'm writing a small internal webapp with some API hooks, and I'd like to have some AJAX-y components. While I don't know if this is powerful enough for everything, the idea of "click on <button> to retrieve data and place in <div>" is really awesome. Doing it in HTML-only is much easier for my peon-non-genius mind.
I agree. There are instances where I think returning HTML instead of JSON works (think of Rails Actioncable or Phoenix Channels), but I have a hard time conceptually understanding what problem is solved by this library.
Thinking about almost all of the full stack, I feel like you'd have more flexibility with something like Rails with Angular on the front end. You can set up a Rails controller to return JSON or HTML depending on the request type.
Throw in UI Router and have your Angular routes match your Rails routes... You can have your pages rendered server and client side and still have the flexibility to render JSON.
It would be sick if there were a stack where you could make your routes more functional and have the routing work client-side and server-side. I know Turbolinks is supposed to solve this problem, but it sort of messes up how events are fired.
Also, Angular templates cannot be modified on the front end... but these intercooler attributes can.
I will admit, the author is probably solving a problem that I haven't encountered or even fully understand. If people are digging it, more power to him.
I work primarily with a legacy, enterprise PHP application and it is a monstrosity (large number of classes, breaking framework changes, deprecated front-end libraries, very old MVC patterns, heavily imperative).
Rails (predictable routing, simplified MVC, template rendering, and general ease of setup and deployment) and Angular (flexibility, directives, safe templates, state-based routing, imperative but can be functional with filters) provide elements of the feature set that I have described above (aside from the routing-object idea for generating server- and client-side routing).
The underlying technologies are not important; Node already has isomorphic javascript. I think Phoenix might be on track to remove the need for a heavy use of javascript for creating a single page application by simply using channels.
However, I cannot speak on Node and Phoenix because I lack the experience with these languages to comment. I do have experience with Rails and Angular to generate an example of how I would like a full stack, AJAX heavy application to work.
Why? There are plenty of developers that use Angular.
If I speak from a place of experience, is that less valid if I use a language that you do not particularly like?
It is not always the correct language in a lot of cases, but it can be for creating the feature set that I have described above - an approach for a more functional application structure that would provide server and client-side routing without Turbolinks.
You don't even need Angular or Rails for this, but these are both tools that I have enough familiarity with to know that it would work.
Because 1) the appeal of intercooler is its simplicity, and 2) I don't know angular and don't have any desire to learn it. It looks complicated, and on the decline: http://stateofjs.com/2016/frontend/.
It's not a language it's a framework. And it will take me (and others on my team) time to learn it, and maintain it in the future. But if you like it, more power to you!
They can only reuse the same tropes so many times. Season 3 takes most of the "mind blowing" ideas from the first series and repackages them in a weaker presentation. The first episode was a weaker repackaging of Fifteen Million Merits (the intrusion of social-media rankings). I think there were two rehashes of White Christmas (the protagonist being in an alternate universe where time moves slowly, which is always revealed at the end of the episode). White Christmas did this much more effectively. Season 3 is just a thin shell around the same tropes.
Could it be that the first two seasons were British, and the third is the first to be British-American, so it felt like the right time to revisit popular ideas with a new spin?
And really, Fifteen Million Merits wasn't all that great either --- it was kind of all over the place narratively, and once you get the intrusive ranking trope, there's really not much more interesting about it. It felt sort of like badly plotted Vonnegut.
Again: if you haven't seen the MeowMeowBeenz episode of Community: it's a pretty good chaser to either of those episodes.
I have to disagree. Fifteen Million Merits is still my favorite episode, and what's more, I don't think it's about the intrusive ranking at all (meta-point: one of the best things about Black Mirror, IMO, is that people will often disagree on not only the quality of an episode, but what the episode was even about. That's how you know it's good).
I think the moral of that story was the inherent futility of ever rebelling against the System. Most people don't try, most who try fail, and even for the one-in-a-million who succeed, the System just instantly reconfigures itself to absorb your rebellion as part of it. Other works have done the "revolutionaries always become the new dictators" shtick, but Fifteen Million Merits did it better because it shows that this process doesn't even require a villain (like in, say, Animal Farm). Everyone in that episode, even the Simon Cowell stand-in, was just punching a clock and doing their job; the suppression of revolt was purely an emergent property of the system... as it usually is in real life.
It was the bleakest, most nihilist bit of television I think I've ever seen, made better by the fact that it wasn't empty "dark for dark's sake" like Twilight Zone-esque twist endings tend to be. I had to go for a walk afterward. I don't see how you can't like it... but like I said, the fact that people can disagree so profoundly is one of the show's best qualities, so I'll upvote you anyway and not hold it against you :)
The real hook wasn't the lame virtual-points Zynga oppression, it was how the main character reacted to their situation and then how the system reacted to them.
It's all ground that was covered before (and arguably better) in Network 40 years ago, but it does it well.
"MacBook Pro (15-inch, Late 2016) draws up to 85W. Use the Apple USB-C charge cable that came with your MacBook Pro, or a certified USB-C cable supporting 5A (100W), to power and charge your MacBook Pro (15-inch, Late 2016) at its full capability.
"MacBook Pro (13-inch, Late 2016, Four Thunderbolt 3 Ports) and MacBook (13-inch, Late 2016, Two Thunderbolt 3 Ports) draw up to 60W."
…
"You should not connect any power supply that exceeds 100W, as it might damage your Mac.
"Using a power supply that doesn't provide sufficient power can result in slow or delayed charging. It's best to use the power supply that came with your Mac.
"MacBook Pro can receive a maximum of 60W of power through the Apple USB-C VGA Multiport Adapter or USB-C Digital AV Multiport Adapter. For the best charging performance on MacBook Pro (15-inch, Late 2016), connect the power supply directly to your Mac."
What a disaster. The tail wagged the dog in the .NET Core team. Every step of the way it became clearer that this was a pet project of some ivory-tower programmers and that we would be left to patch together this mess. Try adding a .NET Core project on your build server. You'll hate yourself by the time you get it building.
Really? We haven't had any problems with our .NET Core projects (a few web apps running on core and a bunch of libraries targeting core). In addition, ASP.NET Core (on .NET core) has been a great stack so far (using it since beta5); I feel very productive with it as well as enjoying the development experience. Why do people need to get fired for this? It doesn't seem like it will be hard to move from project.json back to (a new and improved) csproj.
Yes, really, and I am sure there are some people who have had no problems. That doesn't invalidate the countless hours I spent making this dreadful system work. Did you put it on your build server without installing Visual Studio on it?
Our build server (TeamCity) does not have VS installed.
I can actually tell you the exact software installed on the build server:
1.) Windows Server 2016 (Base OS)
2.) SQL Server 2016 Express with LocalDB only (used for longer integration tests that need to validate migrations)
3.) DotNetCore.1.0.1-SDK.1.0.0.Preview2-003133-x64.exe (.NET core SDK 1.0.1 download from https://www.microsoft.com/net/download)
4.) BuildTools_Full.exe ("Microsoft Build Tools 2015 Update 3" https://www.visualstudio.com/downloads/#d-build-tools). If you have this, then you don't need the full VS install
5.) NodeJs (for running webpack as part of 'dotnet publish')
6.) TeamCity and Octopus Deploy agents
>Did you put it on your build server without installing Visual Studio on it?
Huh?
`docker run microsoft/dotnet`. Took about 2 seconds to get it running on my Jenkins build server. In reality, it took literally no effort. My entire full stack app requires `make` and `docker` to build and nothing else. The backend is an AspNetCore API.
Everyone's been having problems with them, how can you have had none?
They seem to massively change something every 10 minutes.
I also don't get the 'massively productive' comment, in essence very little has changed since, hell even MVC 3, apart from the obsession with DI and a load of packages now don't work. If you weren't 'massively productive' before, you were probably doing something wrong. They just moved a couple of things around and now insist you use npm/bower/whatever hotness they latch onto next week instead of nuget. I'm sure we'll have another blog post about them ditching support for anything but yarn next week.
And how long before they change their minds about how controllers work again, it's been like 4 times in the last 5 years now. First MVC controllers, then Web API, then Web API 2, then OData, then .Net Core, all very similar but slightly different with slightly different ways to register them at startup and slightly different things available on the context. Apart from the utter madness that is OData, great for data tables, bloody awful for everything else.
It's getting almost as bad as javascript churn but it's one company so it doesn't make any sense.
It changed during the pre-1.0 era when the team were very clear about the potential for changes. The big changes were made with plenty of notice and short term backwards compatibility baked in. MVC 5 is still there and still being supported - deliberately choosing to use Core should have been a calculated decision based on what you needed and appetite for risk. If there's a bad influence from the world of JS it seems much more around devs deciding to go into production with "the new hotness".
Moving to npm/bower for front end stuff makes perfect sense, NuGet is a horrible way to deliver client side code, wasn't supported by the majority of library producers and was unknown by anyone who didn't have a .Net background.
I expected changes at the beta stage. I did not expect the major changes between RC1 and RC2. Try writing a book against a moving target! :)
I think all the changes now are due to that late switch of direction. RC2 went straight to 1.0 but it was a major change and should probably have had some more beta/RC releases.
It's not just change. It's difficult to make it work for a single version. It was difficult back when the build tools were a handful of powershell scripts. It was difficult when they moved to the dotnet tooling.
Microsoft shouldn't push out an RTM product and call the broken bits "Preview" to absolve themselves of the responsibility to deliver a reliable product.
I was evangelical about .NET Core. I stuck with it for years. I stuck with it when Damian admitted that he "doesn't build web apps". I stuck with it during the Release Candidates. It's only when they pushed this out the door and called it RTM that the penny dropped and I accepted the project had been mismanaged.
They announced all of this before .NET Core went RTM. Not only that, the tooling was never made RTM (still in preview). If you jumped on the bandwagon that early then you had all of this information and knew what was coming.
You should just wait until both the framework and tooling are RTM if that's the stability you need - the framework is already great but if you need the tooling then wait until it's ready.
Why complain if you understood the status and assumed the risk?
If you knew anything of the traditional release history for Microsoft you wouldn't have commented. All your post demonstrates is total ignorance of the context or history.
It was previously totally fine to use RCs for the last decade, probably way before that too, but I only have experience from then. RC in MS speak meant "pretty much finished, API won't change unless we find something desperately wrong".
Then it suddenly means basically pre-alpha, "anything can change, massively, between RC versions".
> They seem to massively change something every 10 minutes.
> in essence very little has changed since
?
The one thing Microsoft is great at is compatibility. Your existing projects can keep using the older frameworks and they're still supported and will run just fine. What is the issue? If it's about learning the new changes, then it's probably best to wait until everything (framework + tooling) are final so you only have to learn once.
I honestly have no idea what point you're trying to make.
The problem is that it's hard to understand what does what and how it does it. Things in even the same project behave differently. They introduce massive architectural changes and then abandon them. Searching for something on SO is now a crapshoot: which slightly incorrect answer will you get, because the ASP.NET team changed their mind again?
And you can have all these things running in one project, all acting differently.
I work with clients on MVC 3,4,5, with parts in web API 1 and 2 and parts in odata.
And they all behave differently.
For example, and this is just one of many, a JSON date from an MVC controller is 'Date(173737273)', from a web API it's '2016-12-26 23:00:00.0000', from an OData controller it's '2016-12-26 23:00:00.0000Z'. That Z changes behaviour btw and makes the terrible assumption your date is in the same TZ as your server.
All of them are parsed differently and will return different dates in JavaScript.
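For what it's worth, one way to normalize all three shapes on the client; the /Date(ms)/ form is the old ASP.NET MVC serializer format, and pinning zone-less timestamps to UTC is a deliberate choice here, not a given:

```javascript
// Normalize the three date shapes mentioned above into one Date.
function parseDotnetDate(value) {
  // Old ASP.NET MVC style: "/Date(1482793200000)/" (ms since epoch).
  var ms = /\/?Date\((-?\d+)\)\/?/.exec(value);
  if (ms) return new Date(Number(ms[1]));
  // ISO-ish string: make it parseable and pin zone-less values to UTC,
  // so the server's timezone never leaks into the result.
  var s = value.replace(" ", "T");
  if (!/(Z|[+-]\d{2}:?\d{2})$/.test(s)) s += "Z";
  return new Date(s);
}
```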
That's one thing I don't understand with .NET Core. You deploy the whole .NET framework alongside your code, and it then becomes static. And this is targeted at web applications, primarily.
What about security vulnerabilities? Unless the developers redeploy the application we will be left with an unpatched .net stack and unpatched web server?
How is that a good idea? In an ideal world there would be an active dev team busy redeploying and patching every day behind each website. But people who think we live in this world haven't really followed the pretty much constant stream of news about major breaches because of unpatched versions of software being used everywhere.
My dotnet core containers are rebuilt from scratch and redeployed every 20 minutes or so. If there's a security bug in the base OS layer of my container or in the .NET runtime, I'll have it in less than a half hour.
Not too hard, if you have some standard practices in place.
I always enjoy wondering what sort of person downvotes a comment like this. I can't help but feel it's someone with shame/guilt for not having CI/CD in place.
But yeah, if that's too much to ask for, just don't ship the framework in your project. You can still have it installed as a system package. But to be frank, it's nearly the same problem, just in a different spot.
The .net framework of the OS gets updated with Windows Update automatically. It is not the same problem at all.
I obviously didn't downvote but I do understand the downvotes: it is unrealistic to expect every website to be actively maintained forever. I am sure he deploys a new version every 20 minutes of his current project. I would be curious to know how many versions a year he deploys of the projects on which he worked 5 or 7 years ago and from which he moved on.
The world is filled with legacy applications, libraries and websites. Pretending that the code we write today will always be actively maintained and supported is just unrealistic.
Probably not a good idea to update the framework without the application knowing. This could cause bugs and unspecified behavior. It's best to test your application against new versions and then deploy a new version of the app.
The .net framework is pretty much always 100% backward compatible. Again not a problem if you will have an active team maintaining the code. But can you even count how many dead applications and websites you have encountered in your career? At least if the OS gets updated, the application benefits from security patches.
Same; NuGet has been a huge pain in this. It would have been nice to have a ready-made MSBuild script that produces Web Deploy packages (we release with on-premise TFS RM), but we were able to make one manually. I expect .NET Core to have better integration with TFS soon (if not already in the next release).
> How quickly should it be doing the loading of that specific set of files
Sub 500ms? Read the solution, read each project file, read the first level of file and folder names in each project, then show the solution explorer. Anything else can go in a background thread which doesn't block building, running, or opening individual files.
I would like to see where the other 29,500ms are going.
Precisely this. Crystal Reports used to (I don't even know if CR still exists, tbh) display each page of a report as it rendered it. This made the app very productive compared to every other reporting tool we tested (at the time). They all rendered ALL the pages before showing even the first. This is deplorable and anti-user.
Visual Studio should read in all the files, restoring the ones I had open last time, and allow me to start editing immediately while still loading the rest in the background. Also, IntelliSense data should be cached, because if the 30-second load time is because it's prepping that each time, that's a waste.
> Visual Studio should read in all the files, restoring the ones I had open last time, and allow me to start editing immediately while still loading the rest in the background.
It's already done this for a few versions -- a feature named 'asynchronous solution load'. You're able to open a file and start editing while the X hundreds of projects load. However if you do a Navigate To, Find In Files, etc. among other operations, you'll get the modal progress bar because those operations require having the entire solution parsed. But editing and other simple operations are available while the projects are loading (which takes up the bulk of the time).
> CrocodileJS is a Node MVC framework that lets you chew apart JavaScript
Four words in and CrocodileJS is already lumped onto the 4/5ths heap of unusable frameworks (Node is an immediate non-starter in the same vein as MongoDB). Just to give you some perspective and perhaps help you see the irony.
I know that. I wouldn't touch node with a 10 foot pole (after past experiences) and since the parent comment's framework is integrated with Node it's a non-starter.
I doubt any competent developers are even considering Angular for new projects. Not just because of 1.x but because it's not a relevant technology anymore. Everywhere I look cyclic data is the way forward. Using Angular now is like using Backbone in 2014.
> but based on what you've said, especially "I don't fancy the new and shiny. I just get things done fast and done properly", would be enough for me to label you a senior programmer.
I think that's wild speculation. That phrase could mean a number of things.
It could mean they engage in the industry, explore new technologies and make educated decisions which balance the risks associated with adopting new technologies and, in some cases, choose to use technologies that are fit for purpose but are not necessarily bleeding edge.
On the other hand it could mean they have failed to keep up with technological advancements and are using the wrong tools for the job, tools that can't deliver a modern web experience. He might be churning out shocking legacy code that someone else will have to clean up one day. Perhaps his employers are none the wiser and don't realize a different developer could deliver a better quality product in a much shorter time using the tools available today.
Given OP's examples of not knowing "Amazon" or Java Spring, both of which are ancient technology in web years, I would speculate the OP might fall into the latter category. Another strong indicator of this: OP has been doing web development "as long as [they] can remember". You'd think his coworkers or employers would have told him he was a senior developer if he hadn't figured it out himself.
Overall, insufficient information in the OP to make an assessment, but I would be very wary of pandering to someone's ego as it can do more harm than good.
"done properly" means not taking the shortest, technical-debt-laden path to success (amongst other things). It's a hallmark of a senior, in my opinion.
Also, why would you know Amazon operations stuff if your job duties don't expose it to you? The OP was saying 'operate Amazon', not 'use Amazon APIs'.
Both AWS and Spring have changed and grown a lot over the years. Spring Boot is pretty much a reinvention of the Spring developer experience and Spring 5 will be pushing reactive programming into the limelight.
Disclosure: I work for Pivotal. So do many members of the core Spring team.