One of the trickiest parts of MCP in practice has been auth propagation. As soon as the agent backend invokes the MCP server instead of the client, the original user’s auth context disappears—tools that require the user's session_id (or equivalent) suddenly only see a generic token. We ended up needing a pattern for:
- M2M-issued short-lived tokens for backend → MCP calls
- Per-request user metadata injection so tool calls can still act on behalf of the user
- Consistent OAuth2 / Okta validation so both layers trust each other
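The pattern above can be sketched roughly as follows. This is a minimal illustration, not the MCP spec's prescribed mechanism: the token-fetch callable, the `X-User-Id`/`X-Session-Id` header names, and the `M2MTokenCache` helper are all hypothetical stand-ins for whatever your Okta/OAuth2 setup actually issues and whatever claim-forwarding scheme your MCP server validates.

```python
import time
import uuid

class M2MTokenCache:
    """Caches a short-lived machine-to-machine token (e.g. from an
    OAuth2 client-credentials flow) and refreshes it before expiry."""

    def __init__(self, fetch_token, skew_seconds=30):
        self._fetch_token = fetch_token  # callable -> (token, expires_in_seconds)
        self._skew = skew_seconds        # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self):
        if self._token is None or time.time() >= self._expires_at - self._skew:
            token, expires_in = self._fetch_token()
            self._token = token
            self._expires_at = time.time() + expires_in
        return self._token

def build_mcp_headers(token_cache, user_id, session_id):
    """Headers for a backend -> MCP call: the M2M bearer token plus
    per-request user context so tools can still act on the user's behalf.
    Header names here are illustrative; a signed JWT claim would be a
    more tamper-resistant way to carry the user context."""
    return {
        "Authorization": f"Bearer {token_cache.get()}",
        "X-User-Id": user_id,
        "X-Session-Id": session_id,
        "X-Request-Id": str(uuid.uuid4()),  # correlate logs across both layers
    }
```

The key design point is that the M2M token authenticates the *backend*, while the injected user metadata carries the *acting user*; the MCP server has to validate both before trusting a tool call.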
What’s happening here feels less like “Chinese models gaining share” and more like a substrate shift driven by cost physics. When inference drops from dollars to cents and quality converges to GPT-4-mini territory, the default stack for early-stage teams flips almost overnight. At that point founders optimize for runway, not sentiment, and open models become the path of least resistance.
The more interesting consequence is that when inference and fine-tuning are essentially free at startup scale, specialization becomes viable again. Instead of generic prompting against a closed API, teams can afford narrow, high-precision models tailored to their domain — something that used to be economically out of reach. Came across this interesting post - https://www.linkedin.com/feed/update/urn:li:activity:7396291...
I still cross-post occasionally with a canonical link and haven’t seen any negative SEO impact, but Medium doesn’t drive nearly as much traffic as it used to. It’s mostly useful for tapping into specific Medium publications or communities—otherwise the effort rarely pays off.
Conversions sound obvious, but most office buildings just weren’t designed for residential codes. NYC relaxing zoning and allowing more flexibility is probably the main reason it’s seeing so much activity compared to other cities.
Cyber Monday traffic is a different beast - it’s not just more load, it’s bursty, write-heavy, and tied to external systems. A small issue in auth or POS can ripple across the entire commerce stack.
Curious to see if Shopify publishes a postmortem; these incidents usually reveal interesting real-world bottlenecks.
It’s interesting how The Little Prince keeps resurfacing across generations. Even if someone doesn’t connect with every part of it, the themes of loss, imagination, responsibility, and friendship feel universal. It’s rare for something that short to stay relevant for so long.
Have been using HubSpot for CRM, supplemented with Pylon for CX. The combo has worked well for us because HubSpot handles the broader CRM workflows, while Pylon fills some of the gaps on the customer success side.
Pylon has been rolling out some interesting capabilities lately - Account Intelligence in particular has been useful for surfacing context automatically before a human jumps in. Still early, but it’s been helpful in reducing back-and-forth.
How does it approach keyword research? This is the part I’ve struggled with the most. Even with full access to tools like Semrush and Search Console, sifting through millions of data points is cumbersome. A classic case of a task machines can do better than humans.
Sam Altman has shifted his stance: as recently as last May he called ads a "last resort" and "uniquely unsettling," but in a podcast this month he said he "finds ads somewhat distasteful but not a nonstarter." Meanwhile, according to two current employees, CEO of Applications Fidji Simo said in a companywide meeting in recent weeks that OpenAI was looking at advertising and how it could benefit users.
Was looking for this standardization.