• 1 Post
  • 18 Comments
Joined 1 year ago
Cake day: September 7th, 2023

  • Don’t quote me on this, but I’m pretty sure that ending wasn’t originally planned and was added at some point during writing. I’ve been trying to find a source for that and failing, though I’m sure I read an interview mentioning it years ago.

    E: Wait, I think I found something: https://www.animenewsnetwork.com/interest/2020-10-12/violet-evergarden-director-discusses-his-initial-hesitation-about-creating-sequel-film/.165111

    “In the TV series, whether Gilbert lives or dies isn’t shown, but even if Violet were to never meet him again, she would live on,” he said. “Personally speaking, that’s the entirety of the story I intended to tell. So when talks of a sequel came up, I actually said that there was nothing else I wanted to do. But when I read the plot that the scriptwriter Reiko Yoshida wrote, it was so believable that I was spurred to action. I came to think that it was fine for Gilbert to live. This was a little less than two years ago.”

    E2: But the novel finished years before that, so I guess it was already set in stone. The anime supposedly changed a lot compared to the novel, so maybe the ending felt smoother in the novel than in the anime, which didn’t originally plan to include it.

  • Nope, a neural network:

    https://youtu.be/0Xn8xGV_w9w

    https://arxiv.org/abs/2408.14837 “Diffusion Models Are Real-Time Game Engines”

    https://gamengen.github.io/

    We present GameNGen, the first game engine powered entirely by a neural model that enables real-time interaction with a complex environment over long trajectories at high quality. GameNGen can interactively simulate the classic game DOOM at over 20 frames per second on a single TPU. Next frame prediction achieves a PSNR of 29.4, comparable to lossy JPEG compression. Human raters are only slightly better than random chance at distinguishing short clips of the game from clips of the simulation. GameNGen is trained in two phases: (1) an RL-agent learns to play the game and the training sessions are recorded, and (2) a diffusion model is trained to produce the next frame, conditioned on the sequence of past frames and actions. Conditioning augmentations enable stable auto-regressive generation over long trajectories.
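
    To make that two-phase recipe concrete, here is a minimal, illustrative PyTorch sketch of the phase-2 idea only: a denoiser predicts the next frame conditioned on past frames and actions, with extra noise injected into the context frames (the paper's "conditioning augmentation") so autoregressive rollout stays stable. The actual system fine-tunes a pretrained Stable Diffusion model with a proper noise schedule; the toy network, shapes, and names below are my assumptions, not the authors' code.

    ```python
    # Illustrative sketch of GameNGen-style phase-2 training (not the paper's code).
    import torch
    import torch.nn as nn

    CONTEXT, H, W, N_ACTIONS = 4, 64, 64, 8  # assumed sizes, not from the paper

    class NextFrameDenoiser(nn.Module):
        def __init__(self):
            super().__init__()
            # Past frames and the noisy target frame enter as stacked channels.
            self.inp = nn.Conv2d(3 * (CONTEXT + 1), 64, 3, padding=1)
            self.mid = nn.Sequential(
                nn.SiLU(),
                nn.Conv2d(64, 64, 3, padding=1),
                nn.SiLU(),
                nn.Conv2d(64, 3, 3, padding=1),
            )
            # The action history is embedded and added as a per-channel bias.
            self.action_emb = nn.Embedding(N_ACTIONS, 64)

        def forward(self, noisy_next, context, actions):
            # context: (B, CONTEXT, 3, H, W) -> (B, 3*CONTEXT, H, W)
            x = torch.cat([noisy_next, context.flatten(1, 2)], dim=1)
            h = self.inp(x)
            h = h + self.action_emb(actions).sum(dim=1)[:, :, None, None]
            return self.mid(h)  # predicted noise for the target frame

    model = NextFrameDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One illustrative training step on random stand-in data. In the paper,
    # phase 1 records real (frames, actions) trajectories from an RL agent.
    context = torch.rand(2, CONTEXT, 3, H, W)
    actions = torch.randint(0, N_ACTIONS, (2, CONTEXT))
    target = torch.rand(2, 3, H, W)

    # Conditioning augmentation: corrupt the context frames slightly so the
    # model tolerates its own imperfect generations at rollout time.
    context = context + 0.1 * torch.randn_like(context)

    noise = torch.randn_like(target)
    noisy_next = target + noise  # stand-in for a real diffusion noise schedule
    pred = model(noisy_next, context, actions)
    loss = nn.functional.mse_loss(pred, noise)
    loss.backward()
    opt.step()
    ```

    At rollout time you would denoise a fresh frame from the model, append it to the context window, and repeat, which is why the context-noising trick matters: without it, small generation errors compound over long trajectories.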