OpenCores
Hardware and OS integration and security
by Unknown on Mar 22, 2005
I am interested in how current hardware designs constrain operating system design, and in how functions might be moved between hardware and software to better benefit the user. The goal is to provide a secure computing environment with the system owner in the position of making decisions and administering the computer s/he has purchased. This is a consumer-centric approach to security, and it would compete with the current Trusted Computing Platform Alliance (TCPA), which seeks to mandate Fritz chips on every computer and is supported by large companies that want to enforce the DMCA and DRM for the movie and music industries.

Obviously, this type of project has a large process component: it requires schooling governments on the need to protect consumer rights and reminding them that companies have to wait for crimes to be committed and then address those crimes in the court system. It is unreasonable to treat all computer users as guilty and put their computers in "jail." Schooling businesses is also needed - we do keep them in business. A person's home is her/his castle.

An alternative technology architecture is needed to counter TCPA. The goal of the alternative architecture is to show business and government that consumers can have a trusted computer: one trusted not to harbor viruses or allow their distribution, and one that can execute both commercial and Open Source software.

The technology components are as follows:

1. Encrypted instruction sets - What is the overhead of translating instructions via a lookup table in the cache prior to execution? If a processor had this capability, every buffer overflow would be translated into garbage and would not execute. The system owner could translate software based on the lookup table during the install process, using tools that come with the operating system that executes in this environment.
Disk space is cheap, so a user could even keep several encrypted versions of their favorite OS. Since no one else knows the translation lookup table, no viruses could execute on the computer. This would end the code-and-patch cycle. (This idea was patented in the 1970s, though not for virus protection.) Obviously, the computer owner must not install any viruses, but with this approach other computers will not let a virus in the door or allow it to propagate across the network.

2. A shadow stack could be built into the processor. Comparing the shadow stack to the real stack on procedure return could determine whether the stack was overrun. The shadow stack would be activated when RAM is installed in a special slot on the motherboard, to prevent hacking some system parameter to turn it off. I haven't found this idea in any papers on the Internet so far...

3. Multiple instruction set processors. Java virtual machines provide extra security by doing careful bounds checking on arrays, etc. Perhaps a processor could be built to use multiple instruction sets where this additional checking can be done faster.

4. Better processor virtualization. The i386 uses a register to contain the physical address of the page tables, while other processor designs allow virtual addresses to be used, thus allowing the processor to virtualize itself. This allows for building better "sandboxes" to test software.

"Open" is the right direction. Preserving open access to the Internet is important. Preserving access to Open Source is critical. TCPA, the DMCA, DRM, and politicians present a real threat today, if you can believe the news.

If this or similar projects are being worked on, please point me in the right direction.

Thanks,
-ClaudeVMS
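[Editor's sketch] The install-time translation in point 1 can be illustrated with a toy byte-substitution table. Everything here is an assumption for illustration: the "key" is a random permutation of the 256 byte values, the "program image" is a made-up byte string, and real instruction encodings are ignored.

```python
import random

def make_lut(seed):
    """Build a per-machine substitution table: a random permutation
    of all 256 byte values, acting as the owner's secret key."""
    rng = random.Random(seed)
    values = list(range(256))
    rng.shuffle(values)
    return bytes(values)

def invert_lut(lut):
    """The inverse table the processor would hold for decryption."""
    inv = [0] * 256
    for plain, enc in enumerate(lut):
        inv[enc] = plain
    return bytes(inv)

def translate(image, lut):
    """Install-time step: rewrite every byte of the program image
    through the lookup table."""
    return bytes(lut[b] for b in image)

# A toy "program image" and a round trip through the tables.
image = bytes([0x80, 0x90, 0x80])
lut = make_lut(seed=42)
encrypted = translate(image, lut)
assert translate(encrypted, invert_lut(lut)) == image  # decryption restores it
```

Since the table is a permutation, translation is lossless and invertible; the round-trip assertion is the whole point of the install step.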
Hardware and OS integration and security
by RT on Mar 23, 2005
What's your agenda? Academic? Commercial? Ideological? It helps to set
some context.

Try comp.arch for your technical points but, after a couple of minutes,
my reaction is:

Charlton Heston wrote:

> The technology components are as follows:
>
> 1. Encrypted instruction sets - What is the overhead of translating
> instructions via a lookup table


This is already done commercially for translating one instruction set to
another on the fly. Nazomi (?), at least, claims a "patent" on JVM
implementation this way. The immediate overhead is more silicon and,
possibly, an extra pipeline stage, with all the complications that brings.

> in the cache prior to execution of the instruction? If a processor
> had this capability every buffer overflow
> would get translated into garbage and not execute.


How does it protect against buffer overflows? The overflow must contain
valid instructions or it would be of no use to the attacker.

> The system owner could translate software based on the lookup table
> in the install process using tools that come with the operating
> system that executes in this environment. Disk space is cheap so a
> user could even use several encrypted versions of their favorite OS.
> Since no one knows the translation lookup table no viruses could
> execute on the computer. This would end the code and patch cycle.
> By the way this idea was patented in the 1970s but not for virus
> protection. Obviously, the computer owner must not install any
> viruses to protect her/his computer but other computers will not let
> it in the door or allow it to propagate across the network by using
> this approach.


I can't see that this gives you any protection. You have to get your
software from somebody else, unencrypted. You then encrypt it yourself
on your machine. The attacker simply sends you unencrypted software,
like everybody else, and you encrypt it for yourself. It doesn't even
address HTML and JVM-type attacks.

> 2. A shadow stack could be built into the processor. Comparing the
> shadow stack to the final stack on procedure return could determine
> if the stack was overrun. The stack would be activated when RAM is
> installed in a special slot on the motherboard to prevent hacking
> some system parameter to turn it off. I haven't found this idea in
> any papers on the Internet so far...


So how do you update the shadow stack? Why would the shadow stack be any
different from the 'real' stack? Your own trusted software can't tell
that there are 2 stacks any more than the attacker can.

> 3. Multiple instruction set processors. Java virtual machines provide
> extra security by doing careful bounds checking on arrays, etc...
> Perhaps a processor could be built to use multiple instruction sets
> where additional checking can be done faster.
>
> 4. Better processor virtualization. The i386 uses a register to
> contain the physical address of the page tables while other processor
> designs allow virtual addresses to be used, thus allowing the
> processor to virtualize itself. This allows for the building of
> better "sandboxes" to test software.


Virtualisation is a major research area; you should be able to find any
number of papers on this. My local uni has a security research unit and
will let outsiders into seminars if they ask nicely; try yours.

Richard




Hardware and OS integration and security
by Unknown on Mar 23, 2005
----- Original Message ----- From: "RT" <mfoc73 at dsl.pipex.com> To: "Discussion list about free open source IP cores" <cores at opencores.org> Sent: Wednesday, March 23, 2005 4:10 AM Subject: Re: [oc] Hardware and OS integration and security
> What's your agenda? Academic? Commercial? Ideological? It helps to set
> some context.


The true context for my interests is technical. I'm sorry for the
background in the other areas, but it is interesting/scary what some
see as potential futures where Open Source won't work in a TCPA
universe legislated by at least the U.S. Government.


> Try comp.arch for your technical points but, after a couple of minutes,
> my reaction is:
>
> Charlton Heston wrote:
>
> > The technology components are as follows:
> >
> > 1. Encrypted instruction sets - What is the overhead of translating
> > instructions via a lookup table
>
> This is already done commercially for translating one instruction set to
> another on the fly. Nazomi (?), at least, claims a "patent" on JVM
> implementation this way. The immediate overhead is more silicon and,
> possibly, an extra pipeline stage, with all the complications that brings.
>
> > in the cache prior to execution of the instruction? If a processor
> > had this capability every buffer overflow
> > would get translated into garbage and not execute.
>
> How does it protect against buffer overflows? The overflow must contain
> valid instructions or it would be of no use to the attacker.


From what I have read and seen in the classroom, a processor
instruction is (hopefully) fetched into cache awaiting execution so
there are few cache misses. The processor fetches instructions from
the cache, decodes them, and then executes them. The decryption step
fits into this process as follows:

1. Fetch the instruction in its encrypted form.
2. Decrypt the instruction.
3. Decode the instruction.
4. Execute the instruction.

All I'm suggesting is that the decryption is a simple lookup table
(LUT). As an example, suppose a NOP instruction is represented as 0x80
on a machine, and the LUT contains 0x80 in position 0xFE. The entire
operating system and all applications would have been previously
translated using the LUT, which is essentially a key. Now, when the
encrypted program image is fetched into the processor, the additional
LUT lookup occurs, substituting 0x80 for the 0xFE in the encrypted
image. The processor understands 0x80 as a NOP instruction. The LUT
would contain an entry for every processor instruction.
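[Editor's sketch] The four-step pipeline above can be modeled in a few lines. The opcodes, the LUT contents, and the tiny two-instruction ISA are all made up for illustration; only the fetch -> decrypt -> decode -> execute ordering follows the text.

```python
# Toy model of the pipeline: fetch -> decrypt (LUT) -> decode -> execute.
NOP, HALT = 0x80, 0x00

# Decryption LUT as the processor would hold it: maps a byte fetched
# from the encrypted image back to the real opcode. Position 0xFE
# holds NOP, matching the 0xFE -> 0x80 example in the text.
decrypt_lut = {0xFE: NOP, 0x01: HALT}

def run(encrypted_image):
    trace = []
    for fetched in encrypted_image:        # 1. fetch encrypted byte
        opcode = decrypt_lut.get(fetched)  # 2. decrypt via LUT
        if opcode == NOP:                  # 3. decode
            trace.append("nop")            # 4. execute
        elif opcode == HALT:
            trace.append("halt")
            break
        else:
            # Bytes with no LUT entry decrypt to garbage.
            raise RuntimeError("illegal instruction")
    return trace

print(run([0xFE, 0xFE, 0x01]))  # encrypted NOP, NOP, HALT
```

Any byte that was not produced by the owner's install-time translation falls into the garbage branch, which is the property the scheme relies on.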

Therefore, if a program receives "data" from another program over the
network that is really a buffer overflow attack, the return address
the attacker puts on the stack must still pass through decryption. The
attacker would have to know the LUT contents to launch a successful
attack in which the contents of the buffer decrypt correctly and
execute. Likewise, an email containing a virus written in the normal
machine code of the processor would go through the decryption step in
the processor when it attempts to execute, and the decryption of that
normal machine code would result in gibberish. For example, if the
0x80 (NOP) code were looked up in the LUT, perhaps the value would
come out as 0x4D. Eventually the processor hits an exception for the
program (virus) and the operating system terminates the process.
Process termination is better than virus propagation.
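[Editor's sketch] The contrast between the owner's translated code and injected plaintext code can be shown directly. The LUT entries follow the examples in the text (0xFE decrypts to 0x80; plaintext 0x80 is assumed to decrypt to 0x4D); the one-instruction ISA and the function names are invented for illustration.

```python
VALID_OPCODES = {0x80}           # toy ISA: only NOP is defined

decrypt_lut = {0xFE: 0x80,       # legitimate encrypted NOP -> NOP
               0x80: 0x4D}       # attacker's plaintext NOP -> garbage

def execute(image):
    """Run bytes through forced decryption; fault on anything that
    does not decrypt to a defined opcode."""
    for fetched in image:
        opcode = decrypt_lut.get(fetched)
        if opcode not in VALID_OPCODES:
            return "exception: process terminated"  # OS kills the process
    return "ran to completion"

assert execute([0xFE, 0xFE]) == "ran to completion"              # owner's code
assert execute([0x80, 0x80]) == "exception: process terminated"  # injected code
```

The attacker's bytes are well-formed machine code before decryption, which is exactly why they are garbage after it.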




> The system owner
> could translate software
> based on the lookup table in the install process using tools that
> come with the operating system
> that executes in this environment. Disk space is cheap so a user
> could even use several
> encrypted versions of their favorite OS. Since no one knows the
> translation lookup table no viruses
> could execute on the computer. This would end the code and patch
> cycle. By the way this idea was patented
> in the 1970s but not for virus protection. Obviously, the computer
> owner must not install any viruses
> to protect her/his computer but other computers will not let it in
> the door or allow it to propagate
> across the network by using this approach.


I can't see that this gives you any protection. You have to get your
software from somebody else, unencrypted. You then encrypt it yourself
on your machine. The attacker simply sends you unencrypted software,
like everybody else, and you encrypt it for yourself. It doesn't even
address HTML and JVM-type attacks.

> 2. A shadow stack could be built into the processor. Comparing the
> shadow stack to the final stack
> using the shadow stack on procedure return could determine if the
> stack was over run. The stack would be activated when RAM is
> installed in a special slot on the motherboard to prevent hacking some
> system
> parameter to turn it off. I haven't found this idea in any papers on the
> Internet so far...


So how do you update the shadow stack? Why would the shadow stack be any
different from the 'real' stack? Your own trusted software can't tell
that there are 2 stacks any more than the attacker can.


The shadow stack would be available to the processor exclusively;
push and pop instructions would perform parallel operations on the
shadow stack. The physical RAM for the shadow stack could not be
accessed by anyone, or by normal memory management, as it would be on
a separate bus. This approach would provide hardware-assisted bounds
checking for programs written in C that don't do bounds checking
themselves. This gets to the heart of my interests - what can be
moved between hardware and software to reach a better security
environment. Also, if stack smashing is detected, can the correct
stack be rebuilt so the program can continue processing? Preventing
the process from terminating is better than having to restart it.
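[Editor's sketch] The parallel push/pop and return-time comparison can be modeled with two lists, one standing in for attacker-writable memory and one for the processor-private RAM. The class and its method names are invented; a real design would do this in hardware, not software.

```python
class ShadowStackCPU:
    """Toy model: every call pushes the return address to both
    stacks; every return compares them before transferring control."""

    def __init__(self):
        self.stack = []    # normal stack, reachable by a buffer overflow
        self.shadow = []   # processor-private copy on a separate bus

    def call(self, return_addr):
        self.stack.append(return_addr)
        self.shadow.append(return_addr)   # parallel push

    def ret(self):
        addr = self.stack.pop()
        expected = self.shadow.pop()      # parallel pop
        if addr != expected:
            raise RuntimeError("stack smashing detected")
        return addr

cpu = ShadowStackCPU()
cpu.call(0x1000)
cpu.stack[-1] = 0xBEEF   # overflow overwrites the return address
try:
    cpu.ret()
except RuntimeError as e:
    print(e)             # prints "stack smashing detected"
```

On the rebuild question from the text: since the shadow copy still holds the correct address, a design could in principle return to `expected` instead of faulting, though the overwritten stack frame itself would remain corrupt.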




> 3. Multiple instruction set processors. Java virtual machines provide
> extra security by doing careful
> bounds checking on arrays, etc... Perhaps a processor could be built
> to use multiple instruction sets
> where additional checking can be done faster.
>
> 4. Better processor virtualization. The i386 uses a register to contain
> the physical address of the page tables
> while other processor designs allow virtual addresses to be used
> thus allowing the processor to virtualize itself.
> This allows for the building of better "sandboxes" to test software.


Virtualisation is a major research area; you should be able to find any
number of papers on this. My local uni has a security research unit and
will let outsiders into seminars if they ask nicely; try yours.

Richard



Thank you very much for your feedback. I don't want to reinvent wheels or
have solutions in search of
problems.









Hardware and OS integration and security
by Unknown on Mar 27, 2005
On Wednesday, 23 March 2005 at 10:10, RT wrote:
Charlton Heston wrote:
> in the cache prior to execution of the instruction? If a processor
> had this capability every buffer overflow
> would get translated into garbage and not execute.


How does it protect against buffer overflows? The overflow must contain
valid instructions or it would be of no use to the attacker.

> The system owner
> could translate software
> based on the lookup table in the install process using tools that
> come with the operating system
> that executes in this environment. Disk space is cheap so a user
> could even use several
> encrypted versions of their favorite OS. Since no one knows the
> translation lookup table no viruses
> could execute on the computer. This would end the code and patch
> cycle. By the way this idea was patented
> in the 1970s but not for virus protection. Obviously, the computer
> owner must not install any viruses
> to protect her/his computer but other computers will not let it in
> the door or allow it to propagate
> across the network by using this approach.


I can't see that this gives you any protection. You have to get your
software from somebody else, unencrypted. You then encrypt it yourself
on your machine. The attacker simply sends you unencrypted software,
like everybody else, and you encrypt it for yourself. It doesn't even
address HTML and JVM-type attacks.


You could have a different encryption scheme, where the encryption can
only be done by the loader. Code injection then can't work anymore. You
could imagine a software way of encryption and a hardware way of
decryption.

That could even protect against code injection inside the "env"
variable. But return-into-libc still works.




Hardware and OS integration and security
by Unknown on Mar 27, 2005
The papers I have read online about preventing code injection via instruction-set encryption take the initial step of encrypting the image on load. This would leave the program image unencrypted on the filesystem. I was proposing encryption of the OS and all applications so they execute encrypted, with the LUT in the hardware decrypting the instructions at the last possible moment and out of sight of users. The papers also presented information about encrypting interpreted languages (e.g. Perl) and found that encryption worked in this environment too. This may extend to Java, etc...

As for return-into-libc, I felt that the forced decryption of any buffer overflow would result in garbage and force the application to terminate without a return to libc.



Hardware and OS integration and security
by Unknown on Mar 27, 2005
On Sunday, 27 March 2005 at 21:32, claudevms at comcast.net wrote:
> The papers I have read online about preventing code injection via
> instruction set encryption have taken the initial step of encryption of the
> image on load. This would leave a program image unencrypted on the
> filesystem. I was proposing encryption of the OS and all applications so
> they execute encrypted where the LUT in the hardware would decrypt the
> instructions at the last possible moment and out of sight of users. The
> papers I read also presented information about encrypting interpreted
> languages (e.g. Perl) and found that encryption worked in this environment
> too. This may extend to Java, etc...
>
> As for return to libc I felt that the forced decryption of any buffer
> overflow would result in garbage and forces the application to terminate
> without a return to libc.


That would only hold if you also encrypted data, not only code
(strings and/or pointer addresses).



© copyright 1999-2025 OpenCores.org, equivalent to Oliscience, all rights reserved. OpenCores®, registered trademark.