Google creates its own in-house Linux OS for just about every class of device, from mobile up to HPC. If they are not one of the main contributors to a better Linux kernel, then I don't know who would be. You may not directly see Google's influence on Linux development, but I guarantee they play a major role.
@Kommander
The Mill architecture is interesting, but VLIW-type ISAs have been tried before. Let me remind you of the last time we saw this kind of architecture: Itanium. As great as VLIW could be for single-threaded workloads, keeping the belt and the pipeline fully populated is very difficult to do. Any instruction that depends on another automatically creates large overhead inside the CPU.
The timing hazards from branches and memory access are said to be handled using speculative execution, pipelining and other late-binding but statically-scheduled logic.
Source = https://en.wikipedia.org/wiki/Mill_CPU_Architecture
This part really concerns me. If the pipeline has to be flushed on a mispredict or cache miss, then this could create major overhead. Deep pipelines rely on accurate predictions based on past executions. Not to mention, the amount of cache and buffering needed to keep the pipeline populated will make the CPU grow in size rapidly.
Therefore, the Mill architecture is designed as a compiler target for highly-optimizing compilers.
This seals the deal. With Intel basically strong-arming just about every compiler vendor out there, the Mill CPU would only be viable in specific areas of development. Furthermore, it will require specialized low-level programmers to optimize all code produced for the Mill. This is not ideal at all.