Diffstat (limited to 'docs/SledjHamr.html')
-rw-r--r--  docs/SledjHamr.html  4
1 file changed, 2 insertions, 2 deletions
diff --git a/docs/SledjHamr.html b/docs/SledjHamr.html
index a138287..b2371f1 100644
--- a/docs/SledjHamr.html
+++ b/docs/SledjHamr.html
@@ -30,7 +30,7 @@
 <h3> in world asset server </h3>
 <p>Unpack an OAR into a web server. An OAR file is supposed to be a complete copy of everything in the sim, including meta data like access lists. In reality, they do seem to miss prim description and prim sit position data at least. We can always add what is missing. Existing OpenSim versions can create these OAR files from the console. My experience with meta 7 is that A) OAR file creation can be automated. B) OAR file creation is not a noticeable load on a busy server. I say these things since I had written a simple script to make OAR file backups of every meta 7 sim four times a day. No one noticed any extra lag.</p>
 <p>The contents of an OAR file are basically a bunch of XML files to describe the objects in the sim, and the actual assets used to make up those objects. The example OAR files I looked at followed the 80/20 rule fairly well, the assets being 80% of the data, the descriptions being 20%. There are also a few tiny XML files containing the other sim meta data, and one file containing raw terrain data. The assets are just ordinary files, jpeg200, ogg, bvh, lsl, etc. Though the notecard files have some extra header info at the beginning for some odd reason. One thing that is absent is a central description of what goes where. Each objects XML description includes it's position info though.</p>
-<p>When a user arrives at a sim, they could start by downloading those object description XML files, and the terrain file. Those XML files could compress well 20:1 if all done at once as a tarball. The terrain file is more 2:1 if compressed. This is gz compression, which is supported by typical web servers and web browsers on the fly. Once the user has downloaded the description XML files, they can start to rez things within their draw distance, requesting the textures from the assets part of the unpacked OAR file as needed. Rezzing terrain would be - download the terrain raw data (giving the terrain shape), download the sims settings XML file (giving the texture UUIDs and parameters for the terrain textures), dowload the four terrain textures, then generate the terrain randomly as usual. Note that on first arrival, generally only texture assets would be downloaded. Things like sounds and animations would only need to be downloaded when the sims scripts say they are needed. Notecards and scripts embedded in objects would only need to be downloaded when an authorised user opens them for editing.</p>
+<p>When a user arrives at a sim, they could start by downloading those object description XML files, and the terrain file. Those XML files could compress well 20:1 if all done at once as a tarball. The terrain file is more 2:1 if compressed. This is gz compression, which is supported by typical web servers and web browsers on the fly. Once the user has downloaded the description XML files, they can start to rez things within their draw distance, requesting the textures from the assets part of the unpacked OAR file as needed. Rezzing terrain would be - download the terrain raw data (giving the terrain shape), download the sims settings XML file (giving the texture UUIDs and parameters for the terrain textures), download the four terrain textures, then generate the terrain randomly as usual. Note that on first arrival, generally only texture assets would be downloaded. Things like sounds and animations would only need to be downloaded when the sims scripts say they are needed. Notecards and scripts embedded in objects would only need to be downloaded when an authorised user opens them for editing.</p>
 <p>The alternative that I mentioned above, would be to start with some central description of the sim. Basically the equivalent of a web page, index.omg maybe. This would be a text (or binary) file that only includes the list of object names, their X,Y,Z location, and rotation. Perhaps some small amount of meta data to, like the terrain meta data mentioned above. Then the user would only have to download the object description XML (or binary command list) files for those objects within their draw distance. This is a good idea I think. It maps well to the web that everyone is familiar with, and allows a lot of flexibility. For example, this file and all other data could be generated on the fly by PHP code. Typical CMS systems could be used to manage this content just like all the other web content they manage. This means we just supply protocol and data structures, plus a simple implementation, then step out of the way and let people implement their own variations using tools they are long familiar with. An IETF RFC could be in the works.</p>
 <p>All of this leaves out the LL inspired DRM system. Our objective to smash the garden walls means that such a system is of no value, there is no central server to control that. On the other hand, we replace it with the mature and well understood web site content protection systems already in existence. In particular, the only real way on the web to protect IP is to keep it on the server, once the client has it, they can copy things no matter what you do. This already applies to all web content. The only thing you can protect from copying is the web back end code, coz you never transmit that to clients. The same level of protection is thus now given to LSL code. Unless it's being edited, it's never sent across the 'net, only executed by servers. When unpacking the OARs, we can leave out the LSL files.</p>
 <p>To get there from here, we can start simple. Server side really is just a matter of creating OARs and unpacking them into a web server. The web server can be very simple (I've written a suitable one in one page of Java many years ago). At this stage just dealing with static data, with a last modified header support, would be a great start for experimentation. Client code can be modified to check the web server first, then fall back to the usual methods. Simple things we can do to start with, adding the more interesting stuff later. To start with, OpenSim would still be dealing with this stuff, the first step is to run side by side, so we have a small first step, and can use OpenSim as a crutch. The next step might be to generate this central description index.omg file from the existing data in the OAR, then the client can check for that file first. Meshes are new to the old systems, and I already am working on code for those, they will likely be the first things served via an index.omg file.</p>
@@ -129,7 +129,7 @@
 <p>People can run their own copy of a site, and invite others, to take the load off the server. Still talking to the server for server resident objects. These can be private, similar to layers and dimensions in other virtual worlds.</p>
 <p>There is a way for some 3D tools to send updates in real time to other tools, we should definitely support that.</p>
 <p>People could have a little movable sim of their own running from their computer. Like the insides of a car or boat, moving around other peoples sims.</p>
-<p>Touch mode - click on something to touch it, left button for touching with left hand, right button for right hand, dail up the strength of the touch with the mouse wheel. A mouse drag will naturally drag, push, or pull something, with force set by the mouse wheel, and at the speed that the user drags the mouse. Extend this so that any input device can be used to control any limb, digit, or other joint. Professional puppeteers may have open protocols for this sort of thing.</p>
+<p>Touch mode - click on something to touch it, left button for touching with left hand, right button for right hand, dial up the strength of the touch with the mouse wheel. A mouse drag will naturally drag, push, or pull something, with force set by the mouse wheel, and at the speed that the user drags the mouse. Extend this so that any input device can be used to control any limb, digit, or other joint. Professional puppeteers may have open protocols for this sort of thing.</p>
 <p><br /> This work is licensed under a <a href="http://creativecommons.org/licenses/by-sa/3.0/">Creative Commons Attribution-ShareAlike 3.0 Unported License</a>.</p>
 </body>
 </html>
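
The first hunk above proposes generating a central index.omg description from the per-object XML files already present in an unpacked OAR, listing each object's name, X,Y,Z location, and rotation. Below is a minimal sketch of that step in Python. The objects/ directory layout and the Name, GroupPosition and RotationOffset element names follow OpenSim's usual OAR serialization but may differ between versions, and the pipe/comma line format is purely illustrative, since the document deliberately leaves the index.omg format open.

    # Sketch: build a central "index.omg" listing from an unpacked OAR.
    # Assumptions (not fixed by the document): object descriptions live as
    # XML files under objects/, and each contains Name, GroupPosition and
    # RotationOffset elements as in OpenSim's SceneObjectPart serialization.
    import os
    import xml.etree.ElementTree as ET

    def first_text(root, path, default="0"):
        # Text of the first matching element anywhere in the tree, or a default.
        el = root.find(".//" + path)
        return el.text if el is not None and el.text else default

    def build_index(oar_dir, out_path="index.omg"):
        lines = []
        objects_dir = os.path.join(oar_dir, "objects")
        for name in sorted(os.listdir(objects_dir)):
            if not name.endswith(".xml"):
                continue
            root = ET.parse(os.path.join(objects_dir, name)).getroot()
            obj_name = first_text(root, "Name", default=name)
            pos = [first_text(root, "GroupPosition/" + axis) for axis in "XYZ"]
            rot = [first_text(root, "RotationOffset/" + axis) for axis in ("X", "Y", "Z", "W")]
            # One object per line: name | x,y,z | rotation | XML file to fetch
            # when the object comes within draw distance.
            lines.append("|".join([obj_name, ",".join(pos), ",".join(rot), "objects/" + name]))
        with open(out_path, "w") as out:
            out.write("\n".join(lines) + "\n")

    if __name__ == "__main__":
        build_index("unpacked_oar")  # hypothetical unpacked-OAR directory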
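
The "start simple" paragraph only needs a web server that serves static files with Last-Modified support; the author mentions having written such a server in a page of Java. Here is an equivalent sketch in Python rather than Java. The directory name and port are placeholders, and gz compression is left to a real web server, as the document suggests.

    # Sketch: a minimal static file server with Last-Modified / If-Modified-Since
    # handling, serving the unpacked OAR directory. Names are illustrative only.
    import os
    from email.utils import formatdate, parsedate_to_datetime
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    ROOT = "unpacked_oar"  # hypothetical directory holding the unpacked OAR

    class AssetHandler(SimpleHTTPRequestHandler):
        def do_GET(self):
            path = self.translate_path(self.path)
            if not os.path.isfile(path):
                self.send_error(404)
                return
            mtime = os.path.getmtime(path)
            since = self.headers.get("If-Modified-Since")
            if since:
                try:
                    if int(mtime) <= int(parsedate_to_datetime(since).timestamp()):
                        self.send_response(304)  # client copy is still current
                        self.end_headers()
                        return
                except (TypeError, ValueError):
                    pass  # unparsable date, just send the file
            with open(path, "rb") as f:
                data = f.read()
            self.send_response(200)
            self.send_header("Content-Type", self.guess_type(path))
            self.send_header("Content-Length", str(len(data)))
            self.send_header("Last-Modified", formatdate(mtime, usegmt=True))
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        os.chdir(ROOT)  # translate_path() resolves URLs relative to the cwd
        HTTPServer(("", 8080), AssetHandler).serve_forever()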
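
The same paragraph has the client check the web server first and fall back to the usual methods. A sketch of that fallback, assuming a hypothetical base URL for the unpacked OAR and treating the existing OpenSim asset path as an opaque legacy_fetch callback:

    # Sketch: "check the web server first, then fall back to the usual methods".
    import urllib.error
    import urllib.request

    ASSET_BASE = "http://sim.example.com/"  # hypothetical unpacked-OAR location

    def fetch_from_web(relative_path, timeout=5):
        # Try the unpacked-OAR web server; None means not there or not reachable.
        try:
            with urllib.request.urlopen(ASSET_BASE + relative_path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError):
            return None

    def fetch_asset(relative_path, legacy_fetch):
        # Web server first, existing asset protocol second.
        data = fetch_from_web(relative_path)
        return data if data is not None else legacy_fetch(relative_path)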
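
The touch-mode paragraph in the second hunk maps the left and right mouse buttons to left- and right-hand touches, with the wheel dialling the strength and a drag supplying the push or pull. A small sketch of that mapping as plain data; the event and field names are illustrative only, since no input protocol is specified.

    # Sketch: mouse input reduced to touch events, per the touch-mode idea.
    from dataclasses import dataclass

    @dataclass
    class TouchEvent:
        hand: str          # "left" or "right", from the mouse button
        strength: float    # dialled up or down with the mouse wheel
        drag: tuple        # (dx, dy) of a mouse drag; (0, 0) for a plain touch

    class TouchMode:
        def __init__(self):
            self.strength = 0.1

        def on_wheel(self, clicks):
            # The mouse wheel dials the touch strength up or down.
            self.strength = min(1.0, max(0.0, self.strength + 0.1 * clicks))

        def on_click(self, button, dx=0, dy=0):
            # Left button touches with the left hand, right button with the
            # right; a drag becomes a push or pull at the current strength.
            hand = "left" if button == "left" else "right"
            return TouchEvent(hand=hand, strength=self.strength, drag=(dx, dy))

    # Example: dial the strength up twice, then drag with the right button.
    mode = TouchMode()
    mode.on_wheel(2)
    print(mode.on_click("right", dx=12, dy=-3))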