Hacker News | hwers's comments

I’ve moved to more closed-source projects for this reason (just for the fun of coding rather than sharing). Though I suspect they still use private GitHub repos in their deals with Microsoft.


If you're not sharing the code then what's the benefit of GitHub over self hosted?


Not having to set up and maintain your own self-hosted platform. It also makes it easier to share with specific individuals if you want to.


Free backups


At this point, just use Codeberg and send them a few bucks a month if you want to support them. Fuck GitHub.


If I were in management I probably also wouldn’t want my designers to use AI. I pay them good money to draw original pieces, and everyone can tell when AI is used and it looks generic. I’d want my money’s worth.


Not to mention that it could be a legal minefield if your designer uses AI.


Some providers offer indemnification: https://openai.com/policies/services-agreement/#:~:text=13.%...

The other problem is that AI-generated material does not itself enjoy copyright protection.


Well, you can definitely make AI art much less obvious with the right tweaking (directly running models, blending different sub-models, etc). The bigger issues from a professional perspective are liability concerns and then, even if you have guaranteed licensed sources, the impossibility of controlling fine details. For a company like GW it's kind of pointless if it can't reflect all the decades worth of odds and ends they've built both the game and the surrounding franchise around.
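To make the "blending different sub-models" part concrete, here's a rough sketch of the common checkpoint-merging trick: a linear interpolation of two Stable Diffusion checkpoints' weights. The file names and the "state_dict" layout are assumptions, not any particular tool's API.

    # Rough sketch of "blending sub-models": weighted average of two
    # Stable Diffusion checkpoints. Paths are placeholders; the
    # "state_dict" key assumes the common .ckpt layout.
    import torch

    def merge_checkpoints(path_a, path_b, alpha=0.5):
        a = torch.load(path_a, map_location="cpu")["state_dict"]
        b = torch.load(path_b, map_location="cpu")["state_dict"]
        # Interpolate every tensor present in both checkpoints.
        return {k: alpha * a[k] + (1 - alpha) * b[k] for k in a if k in b}

    merged = merge_checkpoints("model_a.ckpt", "model_b.ckpt", alpha=0.3)
    torch.save({"state_dict": merged}, "blended.ckpt")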


> everyone can tell and it looks generic when AI is used

For now. I feel like the gap continues to close with each release, and it's only a matter of time before it becomes indistinguishable.


For generation of 2D images, it seems, if anything, to be getting worse/more-obviously-AI.


I don’t know what you’re looking at, and perhaps things like deepfakes are getting better, but most of the graphic design done with AI that I see around looks like shit.


Alternatively, you can use AI as a starting point. All design is iterative.


But why? It's not all that much harder to do original work than redrawing the output of an AI.


I'm not an artist, so I won't comment on the "ease", but if that were true, then why would a ban be required?

Regardless, you should check out the AI features in the Adobe products [1]. Generative removal, fill, etc [2].

AI, in modern tools, is not just "draw the scene so I can trace it".

[1] https://www.adobe.com/ai/overview/features.html


Sure, and limit yourself at the starting point. People underestimate how limiting these tools are; they're trained on a fixed set and can only reproduce noise from here and there.


> they're trained on a fixed set and can only reproduce noise from here and there

This anti-AI argument doesn't make sense, it's like saying it's impossible to reinvent multiplication based on reading a times table. You can create new things via generalization or in-context learning (references).

In practice many image generation models aren't that powerful, but Gemini's is.

If someone created one that output multi-layer images/PSDs, which is certainly doable, it could be much more usable.


If image generation is anything like code generation then AI is not good at copying layout / art style of the coder / artist.

Using Visual Studio, all the AI code generation applies Microsoft's syntax style and not my syntax style. The returned line of code might be correct, but the layout / art / syntax is completely off. This is with a solution that currently has a little under one million lines of code, which the AI can work off of.

Art is not constant. The artist has a flow and may have an idea, but the art changes form with each stroke, even removing strokes that don't fit. As I see it, AI-generated content lacks the emotion of the artist.


Image generation is nothing like AI code generation in this regard. Copying artist style is one of the things that is explicitly quite easy to do for open-weight models. Go to Civitai and there are a million LoRAs trained specifically on recreating artist styles. Earlier on in the Stable Diffusion days it even got fairly mean-spirited: someone would make a LoRA for an artist (or there would be enough in the training data for the base model to not need it), the artist would complain about people using it to copy their style, and then there would be an influx of people making more and better LoRAs for that artist. Sam Yang put out what was initially a relatively tame tweet complaining about it, and people instantly started trying to train them just to replicate his style even more closely.
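For anyone who hasn't tried it, here's a rough sketch of how little effort applying such a style LoRA takes with the diffusers library; the .safetensors path and the prompt are placeholders, not a specific Civitai file.

    # Rough sketch: applying a downloaded style LoRA to SD 1.5 via diffusers.
    # The LoRA path is a placeholder for whatever style file you'd download.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("./artist_style_lora.safetensors")

    image = pipe("portrait of a knight", num_inference_steps=30).images[0]
    image.save("styled.png")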


Note, the original artist whose style Stable Diffusion was supposedly copying (Greg someone, a "concept art matte painting" artist) was in fact never in the training data.

Style is in the eye of the beholder and it seems that the text encoder just interpreted his name closely enough for it to seemingly work.


Greg Rutkowski

Early Stable Diffusion prompting involved a lot of cargo-cult copy-pasting of random crap into every prompt.


Putting it in the context of an anti-AI argument doesn't make sense. AI was everywhere, like in Photoshop brushes, way before it became a general buzzword for LLMs or image generation. I'm not anti-AI, but that it can only come up with a limited set of outputs based on its training data is simply the truth. Sure, one can get inspiration from a "times table", but if you only ever see 8s and 9s multiplied you're limiting yourself.


> If someone created one that output multi-layer images/PSDs, which is certainly doable, it could be much more usable.

This reminds me: if you ask most image models for something "with a transparent background", they'll generate an image on top of a Photoshop checkerboard, and sometimes they'll even draw the checkerboard wrong.


I've seen plenty of artists start by painting over an image they got from google image search, and end with something incredible.

And it's not that limiting. You aren't stuck with anything you start with. You can keep painting.


And then you decide that the “starting point” is good enough because the deadline is looming.


Has it occurred to you that if this tooling were good, you wouldn't need to encourage creatives so hard to use it?


If that Disney Star Wars creature AIslop video was any indication, that starting point is pretty fucking bad.

https://youtu.be/E3Yo7PULlPs?t=668


For an artist, the starting point is a blank page, followed by a blur of erased initial sketches and strokes. And sources of inspiration are still a useful thing.


Starting from something basically done might have the same effect as spec music has had on movies.


You’re perfectly free to use it for private use; model output has been deemed public domain.


Or you're free to use the output for commercial use if you can get someone else to use the tool to make the (uncopyrighted) output you want.


Isn't that basically what Groq did?

Though I'm sure they'll shut up shop ASAP now that Nvidia has basically bought them.


Nvidia didn’t buy Groq.


They did (unless you're one of the drafters of the Hart-Scott-Rodino Act, in which case, weirdly, they didn't)


Given that it's under scrutiny for regulatory bypass, it's not a purchase and is being reviewed as circumventing those very rules. Might not even happen.

I know, I'm joking: Trump likes Nvidia, but maybe he'll bump the Chinese tax to 30% to approve this deal? In a way I hope he pulls something like that, to punish Huang for his boot shining manipulations.

#iwantRAM


"basically"


68% felt like they were being watched but didn’t feel safe admitting it because they didn’t trust that the reports were truly anonymous.


This seems wildly optimistic to me. We see the same complacency and/or unawareness with e.g. Flock in society - the truth is most people really just don't think about it, or even mind when they do.


If the price per unit of compute keeps going down, we effectively keep having Moore's law for parallel compute like GPUs. (Just get better at making ML models that don't have as big of a communication bottleneck.)


Sounds to me like Runway released it without consulting Stability and called it “1.5”, which according to the license they’re allowed to do, but pretty scammy since Emad had hyped a model with that label. And now Stability is deciding to call this the official release to be nice to Runway and avoid a general PR mess and community infighting.


To me it seems the other way around.

1.5 was apparently held back by Stability for weeks. Runway finally decided to just release it.

Stability requested a takedown, and here you see the Runway CEO telling Stability in no uncertain terms "this is ours to release, we created it, it's under an OS license, you don't hold any IP rights here; all you did was provide compute"

If anything this is a pretty stern rebuke of Stability and a sign of considerable disagreement between the two parties.


Well, if that’s the case, that’s still a pretty shitty thing to do on Runway's part. Just be courteous to Stability’s needs and keep good business relations. Weird behaviour, and I wouldn’t be surprised if in the future Runway is silently excluded from pre-public releases (of which there seem to be many in the years ahead).


Doesn't the "OS license" mean that Runway has permission to release it already? Ack that there might be other agreements and business relations involved though.


Release it, fine. There have been lots of fine-tuned and continued-trained SD models. Just don't call it "1.5", which is the specific label for the model Stability is training internally. Again, the license 'permits' them to do it, but it seems like a very bad business decision: given what their service does, Runway would likely benefit hugely from early access to e.g. Stability's future text2video models, which they now likely won't get until everyone else does (leaving someone else to possibly take market share in their field, and if this is the trade because they got 'impatient', that seems awfully unwise).


It seems very weird to me that “stability” is building a company around something they didn’t create.


Seems to me that Stability AI did a pretty shitty thing and Runway ran out of patience. Pretty weird to paint Runway as the bad guys here.


Didn't Runway create the model? How could Stability exclude them?


Seems really useful in a wasm context


This is Google; they for sure aren’t releasing the weights.


They release a lot of weights open source, including T5 (the underlying model they used in this work). They also indicated their intent here: https://twitter.com/aleksandrafaust/status/15799326368934420....



I’m seeing a lot of R&D solely focused on giving them the chance to extract rent through future silly things like buying artificially scarce houses in FB's metaverse. I’m barely seeing any “giving back” type research like AT&T created with its research into the internet.


Their end-game is pretty obvious, but honestly they are so bad at it... I doubt they will manage to actually build anything successful to extract any value out of it. The reason the Facebook Metaverse is so cringey is that they have no clue how to make games or anything like them. The "real metaverse" already exists in Roblox, Fortnite and a few other popular games that every teenager socializes in.

At the same time, thanks to Facebook, any average Joe can easily get consumer-ready VR hardware for around $400. We really don't have many companies behind VR except for Valve, and they simply don't have enough manpower to create mass-market hardware.

My point is that we must appreciate those engineers who persuaded Zuck to spend money on VR hardware and research. Yeah, their attempt at a "metaverse" is laughable, but the investment in hardware is priceless.


It looks like their AR/VR publication list is here: https://research.facebook.com/publications/research-areas/au...

For publications in general: https://research.facebook.com/publications/


> I’m seeing a lot of R&D solely focused on giving them the chance to extract rent through future silly things like buying artificially scarce houses in fbs metaverse

An example from FB please?


Meta does plenty of FOSS work as far as giving back is concerned. But artificially scarce houses? How about a source for that.


hwers's comment lacks some context, but there is some truth to it. Consider Carmack's comment: https://www.youtube.com/watch?v=BnSUk0je6oo @33:50 onward "A closed platform doesn't deserve to be called a metaverse, or does it?"

Making money in the Metaverse is a pillar of what makes a virtual world an actual world. (Ignoring for now who does the money-making.) Scarcity is an integral part of it: skins, models, rare items. In a virtual world, all scarcity is artificial by definition. Just like with Second Life and the Linden Dollar making it onto real exchanges. Second Life sold worlds as some kind of virtual realtor in the past, VRChat's users sell custom-created skins and rigged models today, and Meta wants to be the platform facilitating that tomorrow, but on a grand scale.

I also highly applaud Meta's contribution to FOSS btw.


This is extremely well written

