New type of computer
posted on March 25th, 2009, 11:01 am
Last edited by runaway on March 25th, 2009, 11:09 am, edited 1 time in total.
Dircome wrote: a 0 is the same as null if i am not mistaken
No, 0 is not equal to NULL or NIL.
0 describes a concrete value.
NULL or NIL is a missing value, because no value exists or the value is unknown.

But if I'm right, NULL is a value too, although it describes the absence of a value :S
Theoretical computer science is such crap...
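To make the distinction concrete, here is a minimal C sketch (just my own illustration): 0 is a concrete integer value, while NULL is a pointer value that refers to nothing at all.
[code]
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    int zero = 0;        /* a concrete value: the number zero          */
    int *nothing = NULL; /* a pointer that deliberately points nowhere */

    printf("zero = %d\n", zero);  /* prints 0 */

    if (nothing == NULL)
        printf("nothing refers to no value at all\n");

    /* *nothing would be an error: there is no value there to read */
    return 0;
}
[/code]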

posted on March 25th, 2009, 11:22 am
But how do you interpret a number that isn't there, then? I guess this question is directed more at crazy moose.
posted on March 25th, 2009, 2:56 pm
To be honest, I was half asleep in a 9am embedded systems lecture and really only half remember. I've just looked up trinary logic on Wikipedia, and apparently it's 0, 1 and 2, so it looks like I was mistaken. If anybody knows more, I'd love to hear about it.
posted on March 25th, 2009, 3:15 pm

posted on March 25th, 2009, 3:19 pm
Yeah, but lots of stuff uses hexadecimal. Also, hexadecimal is still just a notation on top of the binary system.
posted on March 25th, 2009, 8:07 pm
Thought I answered your question already... Quantum computing doesn't use binary logic.
posted on March 25th, 2009, 8:50 pm

Is this gonna be a serious thread about quantum computing? ^^
posted on March 25th, 2009, 9:58 pm
My last post was directed at megaman. It does answer my question about other ways to increase computing speed, but it doesn't really answer my question about using light, unless I just don't understand.
mimesot wrote:
Is this gonna be a serious thread about quantum computing? ^^
Idk but we had the one on shield mechanics and warp drive so why not
posted on March 25th, 2009, 11:39 pm
Last edited by mimesot on March 25th, 2009, 11:45 pm, edited 1 time in total.
First of all, in most programming languages, NULL or NIL is implemented as a pointer / reference to a predefined address in memory space.
If you try to use a variable of a non-primitive data type, e.g. a structure or an object, you will not get a value but an address, wrapped in a reference or a pointer. If this reference was only declared but never initialized (in other words: if you declared a non-primitive data object but never assigned a value to it), the system will return a default reference: NULL. Using it will throw an exception, and your program will have to react or be aborted.
The numerical value of the address behind NULL may depend on the system, so it doesn't need to be zero. Your compiler will know.
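A minimal C sketch of that idea (with the caveat that in C an uninitialized local pointer is simply garbage rather than a guaranteed NULL, and dereferencing NULL usually crashes the program rather than throwing an exception; the principle of checking the reference before using it is the same):
[code]
#include <stdio.h>
#include <stdlib.h>

struct ship { const char *name; };

int main(void)
{
    struct ship *s = NULL;   /* declared, but no object assigned yet */

    if (s == NULL) {
        /* react before touching s->name, otherwise the program aborts */
        fprintf(stderr, "reference not initialized yet\n");
    }

    s = malloc(sizeof *s);   /* now give it a real object             */
    if (s == NULL) return 1; /* malloc itself can also return NULL    */
    s->name = "Defiant";
    printf("%s\n", s->name);

    free(s);
    return 0;
}
[/code]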
About hexadecimal, trinary systems and so on: binary, trinary and hexadecimal are number systems for calculation, just like the commonly known decimal system. In earlier times many other systems were in use (Germanic base 12, Maya base 20). Computers, which work by switching the voltage of nodes between discrete states, were once expected to work with systems other than just binary. And indeed it is possible to construct such machines, but it is very inefficient to do so; the implementation is rather difficult.
An alternative way to achieve higher values is to combine several parallel binary entities into one value. For example, when you combine 4 bits into a bus, you can encode 16 numbers (0 to 15, to be correct). I'd call this a hexadecimal bus, but I don't think that would have been sufficient for a Nintendo (I may be wrong, colours would be difficult then). I rather believe the Nintendo used a 16-bit bus system.
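A small C sketch of the packing idea (my own illustration): four separate bit lines combined into one value from 0 to 15, i.e. one hexadecimal digit.
[code]
#include <stdio.h>

int main(void)
{
    /* four parallel "lines", each carrying a single bit */
    unsigned b3 = 1, b2 = 0, b1 = 1, b0 = 1;

    /* combine them into one 4-bit value: 1011 binary = 11 decimal = 0xB */
    unsigned value = (b3 << 3) | (b2 << 2) | (b1 << 1) | b0;

    printf("bits 1011 -> %u decimal, 0x%X hexadecimal\n", value, value);
    return 0;
}
[/code]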
The speed of a computer doesn't directly depend on the number of states you can encode with one entity, but on the speed with which the states are distinguished. In a (non-quantum) light processor it may be possible to utilise different frequencies of light, but I can't tell you about the efficiency, because I don't know of any serious implementation. The speed of a computer mostly depends on the density of logical gates: since electrical impulses, like all other information, travel at roughly the speed of light, the smaller the size, the shorter the clock period.
One problem with the classical architecture is that information is destroyed (e.g. in an AND gate) -> entropy rises -> heat is dissipated. Secondly, there are quantum effects that perturb the classical processes. So there will be an absolute upper limit on the speed of computers with a classical architecture.
I won't say much about quantum computers yet, but: a quantum computer implements a completely different logic, though binary logic can be emulated easily. Except for some special applications (e.g. the travelling salesman problem or prime factorisation), the quantum computer's power isn't even accessible, because only orthogonal states can be read out accurately. Thus it effectively works with ordinary bit operations and is not much faster than a normal computer for regular applications. Theoretically it could be built on a really small scale and doesn't have the problem of dissipating energy, but unfortunately the phenomenon of decoherence limits its capability a lot.
PS: I think a Heisenberg compensator could handle that problem too *gg*
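To illustrate the read-out limitation with a toy example (my own simulation sketch, not how a real quantum computer is programmed): a single qubit a|0> + b|1> only ever yields 0 or 1 when measured, with probabilities |a|^2 and |b|^2, so the amplitudes themselves can only be estimated statistically over many runs.
[code]
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>

int main(void)
{
    /* qubit a|0> + b|1> with real amplitudes, |a|^2 + |b|^2 = 1 */
    double a = sqrt(0.8), b = sqrt(0.2);
    int counts[2] = {0, 0};

    srand((unsigned)time(NULL));
    for (int i = 0; i < 10000; ++i) {
        /* measurement collapses the state: result 0 with probability |a|^2 */
        double r = (double)rand() / RAND_MAX;
        counts[r < a * a ? 0 : 1]++;
    }
    /* the amplitudes are never read directly, only estimated from statistics */
    printf("measured 0: %d times, 1: %d times (expect roughly 8000 / 2000)\n",
           counts[0], counts[1]);
    (void)b; /* b only enters via the normalisation |a|^2 + |b|^2 = 1 here */
    return 0;
}
[/code]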
posted on March 26th, 2009, 7:24 pm
Dominus_Noctis and Mime hit it on the nose. We can already compute using light instead of circuits. You're right in it being faster, but it isn't efficient (i.e. cheap or small). The next generation of computing will more than likely be something organic-based (i.e. DNA), as some of you have hinted at. And to answer the initial question, it will be able to efficiently process (not including human error, of course) separate values for 0, 1 and 2. I believe they did this in experiments in the mid-90s.
posted on March 27th, 2009, 12:40 pm
Borg101 wrote: You're right in it being faster, but it isn't efficient (i.e. cheap or small).
As far as I know, neither of those statements can be verified today. What is your reason to believe it is faster? And what makes you think it is less efficient? As far as I know there is no proper design yet to even decide these questions for one architecture, but you might tell us about the one you know.
The upper speed limit is set by the reaction time of the AND gate, which depends on the molecule / structure that implements it, and on its size.
Borg101 wrote: The next generation of computing will more than likely be something organic-based (i.e. DNA), as some of you have hinted at. And to answer the initial question, it will be able to efficiently process (not including human error, of course) separate values for 0, 1 and 2. I believe they did this in experiments in the mid-90s.
Using DNA you have 2 pairs of bases, which makes 4 possibilities per logical entity. So if you refer to DNA, you have 0, 1, 2 and 3 as possible values, to be correct. There may be an arbitrary number of values coded in one entity if you use different designs.
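As a tiny illustration of that counting argument (my own sketch): each of the four bases A, C, G, T can be mapped onto one of the values 0-3, i.e. two bits per base.
[code]
#include <stdio.h>
#include <string.h>

/* map one DNA base onto a value 0..3 (two bits); -1 for anything else */
static int base_value(char base)
{
    switch (base) {
    case 'A': return 0;
    case 'C': return 1;
    case 'G': return 2;
    case 'T': return 3;
    default:  return -1;
    }
}

int main(void)
{
    const char *strand = "GATTACA";
    for (size_t i = 0; i < strlen(strand); ++i)
        printf("%c -> %d\n", strand[i], base_value(strand[i]));
    return 0;
}
[/code]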
In my opinion the organic structures are possibly efficient but not fast. I believe the latter because they rely on chemical processes (at least if you refer to processes similar to living bodies, like DNA processing), whose speed mostly depends on diffusion, which is nowhere near light speed, in contrast to the behaviour of electric potentials and EM waves. I'm very uncertain about the efficiency of those systems; I've got concerns about heat dissipation, low gate density and quantum effects. So we have to be rather careful about judging these things.
posted on March 27th, 2009, 1:49 pm
The reason I believe it to be faster (keep in mind we're talking hypothetically here) is that light travels faster than electricity. And as far as it not being cheap or small: we have circuits that use light already... lasers specifically... and they're not small or cheap setups by any means.
As far as speed for DNA-based computing goes, I don't know the specifics, only that it's been used for some supposedly advanced algorithms. That's what I was talking about being in the mid-90s. I also remember reading somewhere that DNA-based computing would allow computers equivalent to today's largest/most powerful supercomputers to be extremely small.
This is all as far as I've read/been taught... by no means am I an expert in computing. I'm only a 3rd year computer science student.
posted on March 27th, 2009, 4:33 pm
OK, then you've chosen a false reason. To pass information along you do not need to move an electron from one place to another; only the impulse has to be passed on. That happens at (nearly) the speed of light!
Reason: an electric impulse arises when you apply a force to the electrons in a conductor. This can be done by locally altering the electron density or by suddenly applying an electric field; in other words, by changing the voltage. The impulse (e.g. when you raise the voltage) locally changes the density of electrons, which increases the electric field around that area. That field pushes on the electrons in the vicinity, causing the density to rise there and the density at the origin to decrease. This goes on and on, and in that way the impulse travels through the conductor.
To give you a picture, think of a pond into which you throw a stone. Waves emerge. Where they are high, you have more water (higher electron density, higher voltage) and thus more pressure. The water will flow towards the places where there is less water, forced there by the water pressure; the force is the analogue of the electric field. But there is resistance from the resident water, so the incoming water slows down, piles up with the resident water and alters the water level there. This process goes on continually, and so the wave travels away from where you threw in the stone. Note that the water itself doesn't move very far, because it just bumps into the water next to it. To be exact, the water doesn't really need to move anywhere but up and down, because the "next water" is in its immediate surroundings.
Alternatively, think of a water pipe: when you turn it on and the pipe is not empty (just as a conductor is never empty of electrons), the water comes out at the other end almost instantly (delayed at the speed of sound, to be correct). The water at one end does not need to wait for the water you just pushed into the pipe at the other end.
Convinced?
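To put some rough numbers on that (a back-of-the-envelope C sketch; my own assumption is that the signal moves at about half the vacuum speed of light in a real conductor): at a few GHz a signal only covers a few centimetres per clock cycle, which is why smaller chips allow shorter clock periods.
[code]
#include <stdio.h>

int main(void)
{
    const double c = 3.0e8;                   /* vacuum speed of light, m/s    */
    const double v = 0.5 * c;                 /* rough signal speed in a wire  */
    const double clocks[] = {1e9, 3e9, 10e9}; /* clock rates in Hz             */

    for (int i = 0; i < 3; ++i) {
        double period = 1.0 / clocks[i];      /* seconds per cycle             */
        double reach  = v * period * 100.0;   /* distance per cycle in cm      */
        printf("%4.0f GHz: signal travels about %.1f cm per clock cycle\n",
               clocks[i] / 1e9, reach);
    }
    return 0;
}
[/code]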
posted on March 27th, 2009, 4:50 pm
The size of such a DNA computer is indeed very interesting.
Some thoughts: an Intel Pentium 4 processor uses 65nm technology. An organic atom (carbon, oxygen and so on) has a radius of about 0.1nm, which means a present-day electrical logic gate is only about 650 atoms across. The organic molecules used for manipulating DNA are proteins; those have masses from a few to a few thousand kDa, which roughly corresponds to the number of atoms. Thus, in a very, very rough approximation, they are about 10nm in diameter. The data density within the DNA itself is even higher (I estimate roughly 10 bits per nm of DNA). So it's realistic to say that today's computers are bigger in their operating units and much bigger in their storage units than hypothetical organic ones.
Unfortunately, chemical reactions on that scale depend on probabilistic effects and are thus rather slow if you want reliable results. I'm excited to see what tricks the scientists will come up with to overcome that problem.
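A quick sketch of a linear estimate along those lines (my own numbers: roughly 0.34nm rise per base pair and 2 bits per base, ignoring that strands can also be folded up in three dimensions):
[code]
#include <stdio.h>

int main(void)
{
    const double nm_per_base   = 0.34; /* approximate rise per base pair in B-DNA */
    const double bits_per_base = 2.0;  /* 4 bases -> 2 bits of information each   */

    double bits_per_nm   = bits_per_base / nm_per_base;
    double bits_per_gate = bits_per_nm * 65.0;  /* compare with a 65nm feature size */

    printf("linear density: about %.1f bits per nm of DNA\n", bits_per_nm);
    printf("over one 65nm gate length: about %.0f bits\n", bits_per_gate);
    return 0;
}
[/code]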
posted on March 27th, 2009, 11:29 pm
Last edited by Dominus_Noctis on March 27th, 2009, 11:37 pm, edited 1 time in total.
Computer Made from DNA and Enzymes
(for a first hit example)
The data density within DNA is amazingly high (consider our own genome with 30-40,000 genes), and the reactions are also much faster than in a silicon-based computer (you'd better hope that's the case, otherwise none of us would be here). No current silicon computer can compete with a genome in either size or storage, by the way.
What is DNA computer? - A Word Definition From the Webopedia Computer Dictionary
...a lot more than 10 bits per nm of DNA, as you can see (also, if you notice, a DNA computer does not have to rely on just 4 possible values... synthesizing multiple strands gives far more computing power)