That makes sense. The "show everything at once" approach probably reduces some of the back-and-forth that hypermedia workflows rely on.
It’s interesting that some models can infer structure from hypermedia more easily. That seems like another place where semantic structure ends up helping both humans and machines interpret an interface. NICE!