Genetry Solar Forums

Windows is BETTER than Linux (Can't change my mind)


Recommended Posts

Considering that the majority of computer systems and SoC devices throughout the world that run a base operating system are running either a customized flavor of GNU/Linux or BSD, I would say that answers the question of which is better.

The only reason people still use Windows is its market share among desktop operating systems.  People are used to Windows, or they use it for work and can't be bothered to learn a whole new OS even if it would end up being better.  Windows 10 isn't terrible once you spend the 1-2 hours it generally takes after a fresh install to get rid of all of the advertising, the telemetry/tracking junk, the risky/unneeded services, and all the unneeded bloat that M$ feels the need to shove down our throats.

Linux and BSD have the problem of not having enough adoption/market share on the desktop to force more software and hardware companies to release versions of their software and drivers for Linux and BSD.

If you have an Android-based device or a Chromebook, then you are using Google's customized version of Linux.  If you are using an iOS device or OSX, then you are using Crapple's customized version of a BSD-based OS.  Most home Wi-Fi routers also run a stripped-down flavor of Linux or BSD.


Windows Throughout The Years

I second this!  Last Windows OS I could tolerate (or even partially enjoy) was Win 7.  After that, it's all downhill from there.

EDIT: I run Ubuntu 18.04 LTS x64 at the time of this post.  Need to upgrade to 20.04 LTS...but don't feel like reinstalling everything just yet.


6 hours ago, Sid Genetry Solar said:

 Last Windows OS I could tolerate (or even partially enjoy) was Win 7.  After that, it's all downhill from there.

XP is where I got off the lolsoft ship. Windows ME seemed like peak awful, then the switch to the NT kernel really stabilized things for a while, only for Vista to come along and consume half the resources on your modern high-end gaming rig... which, btw, can it run Crysis?

A lot of people who discover Linux evangelize hard. I used to, but don't anymore; it just tends to annoy others and cause me more work =). Some unfortunate news for @Sean Genetry Solar: like it or not, Microsoft is adopting a lot of Linux tech ( https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/linux-containers ) even into their core OS ( https://docs.microsoft.com/en-us/windows/wsl/install-win10 ) that professionals and businesses are demanding ( https://www.howtogeek.com/249966/how-to-install-and-use-the-linux-bash-shell-on-windows-10/ ).
Also, don't underestimate the effect of Apple switching to the M1 chip or NVIDIA buying ARM. It's quite possible that consumer-grade computing is on the verge of shifting architectures to ARM wholesale; then Windows will at least temporarily lose its greatest advantage: its immense historical software library. Windows as you know it has only run on a handful of architectures in its history, whereas Linux has run (and continues to run) on most prolific architectures, even some obscure ones. It is definitely better poised to compete on ARM than Windows is.

It's really kind of silly to compare Windows to Linux... one is a kernel, the other is kernel + userspace + UI + etc. Microsoft could even get out of the game of OSes altogether, opting to ship a "Windows" running the Linux kernel with their own UI on top, and instead contribute to WINE. I'm surprised it doesn't occur to many that WINE could eventually surpass the official Win32/Win64 API implementation entirely; in fact it already has in some ways (like backwards compatibility).

[Attached image: inevitability.jpg]

Edited by kazetsukai

1 hour ago, kazetsukai said:

Also, don't underestimate the effect of Apple switching to the M1 chip or NVIDIA buying ARM. It's quite possible that consumer-grade computing is on the verge of shifting architectures to ARM wholesale; then Windows will at least temporarily lose its greatest advantage: its immense historical software library. Windows as you know it has only run on a handful of architectures in its history, whereas Linux has run (and continues to run) on most prolific architectures, even some obscure ones. It is definitely better poised to compete on ARM than Windows is.

I actually worry about the future of computing if we move away from full-fat CPUs to RISC and SMT platforms, due to the limitations and bottlenecks it will cause down the road for those who use their computers for more than consuming content online and basic tasks that lend themselves well to being parallelized.  The single biggest reason the M1 and ARM-based chips look so good on paper is that they are substantially simpler in design, because so much has been taken away.  They don't appear to be testing them apples-to-apples against all the instruction sets available on x86/64 CPUs; they are only showcasing the things they can run natively, and those native tasks all lend themselves well to heavy simultaneous multithreading.  The minute you have to emulate, or add a virtualization layer on top, to run an instruction set that isn't supported, you realize they are not a replacement for x86/64 in any real way.

That being said, because they have a reduced instruction set they are able to focus more on parallelization, which is where pretty much all of the performance is coming from.  With all of that parallelization they are also able to reduce the clock speed substantially to drastically lower power consumption, but that also means any task that can't be parallelized, or requires many cycles, will take far longer to complete.

Just imagine how fast the M1 could be if it were able to hit 5+ GHz on air. But then it would still have the same architectural limitations, plus added timing issues, and the power consumption would probably be on the order of 500+ watts due to the transistor count.
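The clock-speed-vs-parallelism tradeoff described above can be put into numbers with Amdahl's law, which the post doesn't name but which is the standard way to reason about it. A minimal sketch, with made-up workload fractions chosen purely for illustration:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Theoretical speedup for a workload where `parallel_fraction`
    of the runtime can be spread across `n_cores` (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A 95%-parallel task on 8 cores gets ~5.9x, not 8x; and no matter
# how many cores you add, a 50%-serial task never quite reaches 2x.
for p, n in [(0.95, 8), (0.95, 1024), (0.50, 1024)]:
    print(f"parallel fraction {p:.2f} on {n} cores: "
          f"{amdahl_speedup(p, n):.2f}x speedup")
```

This is why the "task that can't be parallelized will take far longer" point matters: the serial fraction, not the core count, ends up dominating.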


2 minutes ago, kuhrd said:

I actually worry about the future of computing if we move away from full-fat CPUs to RISC and SMT platforms, due to the limitations and bottlenecks it will cause down the road for those who use their computers for more than consuming content online and basic tasks that lend themselves well to being parallelized.

Performance and parallelization go hand-in-hand if you have lots of computation to do. There's little reason, for instance, that rendering pixels from a scene cannot be heavily parallelized.  Or cryptography, or crunching data in big files.  I guess the reasonable steel-man would be to ask what kinds of tasks need to be done on general-purpose computers that cannot be parallelized well, and why. Is it the task that cannot be parallelized, or is it the technique accomplishing the task that is unsuitable for parallelization?
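To make the pixel-rendering point concrete, here is a minimal illustrative sketch (the toy "shader" function and all names are invented for the example): each pixel's value depends only on its own coordinates, so the work splits across any number of workers with no coordination. Threads are used here only to show the structure; a real renderer would use processes or a GPU.

```python
from concurrent.futures import ThreadPoolExecutor

def shade_pixel(coord):
    """Toy 'shader': each pixel depends only on its own coordinates,
    so every call is independent of every other call."""
    x, y = coord
    return (x * 31 + y * 7) % 256

def render(width, height, workers=4):
    coords = [(x, y) for y in range(height) for x in range(width)]
    # Because no pixel reads another pixel's result, this map can be
    # partitioned across workers in any way without changing the output.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(shade_pixel, coords))

frame = render(4, 4)
```

The same shape (independent map over a big input) is why cryptography batches and big-file crunching parallelize so well.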

2 minutes ago, kuhrd said:

The single biggest reason the M1 and ARM-based chips look so good on paper is that they are substantially simpler in design, because so much has been taken away.  They don't appear to be testing them apples-to-apples against all the instruction sets available on x86/64 CPUs; they are only showcasing the things they can run natively, and those native tasks all lend themselves well to heavy simultaneous multithreading.

I'd argue the extensive number of special instructions illustrates a weakness in x86, not a strength. Let's say for argument's sake some specific ARM chip manufacturer added transistors to accelerate SSE2 or MMX extensions; it's the same thing. The question is: for the vast majority of use cases, do you really need all of those extensions? Or did those extensions come from use cases where x86 fell short?

I must be misunderstanding you. Let's say you have some program written in C. Of course you need to compare the performance of natively compiled ARM code on the ARM chip to natively compiled x86 code on x86 variants to get an apples-to-apples comparison, not how well the ARM chip will execute the kinds of instructions present in the x86 binaries.

2 minutes ago, kuhrd said:

The minute you have to emulate, or add a virtualization layer on top, to run an instruction set that isn't supported, you realize they are not a replacement for x86/64 in any real way.

Again, I must be missing something in your reasoning. Does this apply in reverse? Does x86 emulate/virtualize ARM instruction sets flawlessly at native speeds? Because I never got that memo.

2 minutes ago, kuhrd said:

That being said, because they have a reduced instruction set they are able to focus more on parallelization, which is where pretty much all of the performance is coming from.  With all of that parallelization they are also able to reduce the clock speed substantially to drastically lower power consumption, but that also means any task that can't be parallelized, or requires many cycles, will take far longer to complete.

Just imagine how fast the M1 could be if it were able to hit 5+ GHz on air. But then it would still have the same architectural limitations, plus added timing issues, and the power consumption would probably be on the order of 500+ watts due to the transistor count.

Does parallelization come from somewhere other than just more (redundant) cores? I think the performance comes from IPC. If in one of those 5 billion cycles (5 GHz) you complete one instruction on ARM, but on an x86 chip it takes even just two cycles to complete the same instruction, the ARM chip is going to win by a theoretical 100% margin. I think this is the potential RISC offers: to complete instructions in fewer cycles than CISC architectures. If you need to do complex work, don't try to make it happen at the instruction-set level; make your instruction set as simple and as cheap as possible in terms of cycles consumed, and then leave the complex computations to the programmers and their code, where they can parallelize. It's not a functional difference as much as it is a philosophical difference.
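The cycles-per-instruction arithmetic above can be sketched in a few lines. The 1-cycle vs 2-cycle figures are the post's hypothetical, not measured data:

```python
def instructions_per_second(clock_hz, avg_cycles_per_instr):
    """Throughput = clock rate / average cycles per instruction (CPI)."""
    return clock_hz / avg_cycles_per_instr

# The hypothetical above: same 5 GHz clock on both chips, but the
# RISC chip retires an instruction every cycle while the CISC chip
# needs two cycles for the equivalent instruction.
risc = instructions_per_second(5e9, 1.0)
cisc = instructions_per_second(5e9, 2.0)
print(f"RISC throughput is {risc / cisc:.0%} of CISC's")  # prints 200%
```

In other words, at a fixed clock, halving the average CPI doubles throughput, which is the "theoretical 100% margin" in the post.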

Disclaimer: I'm not a chip/instruction set expert, I'm a Java programmer, don't even do memory management 😃


The original RISC concept was a rigid, simple ISA specifically intended for high instruction throughput and easy parallelism, resulting from the much simpler design of the execution unit.  Instructions generally execute in a single clock cycle once all the data is available.  Memory accesses are all aligned with the native bit width of the CPU: 32, 16, 64 bits, etc.  All the massive amount of silicon that currently goes into parallelising and optimising the x86 ISA (out-of-order execution and all that ails it; hellllo Spectre, etc.) is not needed.  Even the current ARM ISA is bloated by genuine RISC standards.  The heavy lifting of code optimisation and parallelisation was moved to the compiler, where it could evolve and develop without changes to the silicon.

Anything CISC can do RISC can do too, and vice-versa.  The difference in performance comes down to the hardware and software engineers.
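A toy model of that last point (entirely illustrative; the instruction names and cycle costs are invented, not taken from any real ISA): both styles compute the same result, and the difference is only where the complexity and the cycles live.

```python
# CISC-style: one big "string copy" instruction with a multi-cycle,
# microcoded cost. RISC-style: the same copy composed from simple
# single-cycle load/store/branch steps. Costs are made up.

def cisc_copy(src):
    dst = list(src)            # one REP-MOVS-like instruction
    cycles = 4 + 2 * len(src)  # invented microcode setup + per-byte cost
    return dst, cycles

def risc_copy(src):
    dst, cycles = [], 0
    for byte in src:           # explicit load/store loop
        dst.append(byte)       # load (1 cycle) + store (1 cycle)
        cycles += 2
        cycles += 1            # branch back to loop head
    return dst, cycles

data = list(b"hello")
assert cisc_copy(data)[0] == risc_copy(data)[0]  # identical results
```

Which one wins on cycles depends entirely on the invented constants, i.e. on the hardware and software engineers, which is the point of the post above.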


On 6/4/2021 at 11:31 AM, Sid Genetry Solar said:

Windows Throughout The Years

I second this!  Last Windows OS I could tolerate (or even partially enjoy) was Win 7.  After that, it's all downhill from there.

EDIT: I run Ubuntu 18.04 LTS x64 at the time of this post.  Need to upgrade to 20.04 LTS...but don't feel like reinstalling everything just yet.

Hey, you are missing a few cases of Broken Windows. WFWG 3.11 was a good one until they broke just about everything in Lose 95. And then there was Windows ME and Bob. And what about Win 2K Pro, which actually worked decently?


2 minutes ago, Waterman said:

Hey, you are missing a few cases of Broken Windows. WFWG 3.11 was a good one until they broke just about everything in Lose 95. And then there was Windows ME and Bob. And what about Win 2K Pro, which actually worked decently?

Well, I hav'ta admit, it's kinda hard to frame a broken pile of glass 🤣.


Windows 98 was a pretty good OS, which was destroyed by Windows ME. Then Windows XP was awesome, after a couple of service packs. It was too good; it just kept going and going. MS learned their lesson: never again would they make such a sturdy and long-lasting OS. XP still runs my CNC machine. The source code has been leaked, but it would be nice if MS would officially release the code so that the amateur community could construct a truly great Windows-like OS.


ReactOS is a nice idea, but I think I'll be dead from old age before they get to a release version, and TBH WINE on *nix will meet most people's needs to run a Windows program without Windows.

IMO, Microsoft releasing the source code for anything close to a current version of Windows would only happen just before they drop Windows, perhaps moving to a compatibility layer on Linux, etc.  'Next' versions of Windows aren't built in a vacuum.  It's an evolutionary process rather than a revolutionary one, to the point where a vulnerability found in a current version of Windows has a pretty high chance of existing in a discontinued version, unless that vulnerability is in some truly new thing that didn't exist in any form in the old version.  Releasing, say, Windows NT 3.1 source code would still give away far too many secrets, methods and concepts that exist today in Windows 10.


The biggest reason Windows XP was so successful is that it was built on the then very stable and robust Windows 2000 kernel and base OS, and people became used to using it at work or at school.  Windows XP actually inherited most of the driver support of Windows 2000.  Windows ME was mostly a flop because it suffered from all of the driver-support issues of the Windows 98SE and Windows 95 eras, since it was based on the same code base as its predecessors.  Windows XP basically marked the turning point when M$ decided that having one kernel and OS subsystem software stack for all versions of Windows moving forward was far better than trying to maintain and fix everyone else's broken drivers and code.  Windows Vista brought a further push to improve the driver architecture across all devices supported in Windows, along with a lot of improvements in software and memory management.  The nice thing is that you can often take drivers for a piece of hardware that hasn't had new drivers since Windows Vista or Windows 7, and they will often install and work properly even on the latest version of Windows 10.  That being said, I wonder if the next version of Windows M$ is talking about will continue with the same kernel architecture, or if they will break everything and try something new.  I do hear talk of further Linux support within Windows, but I wonder what that will result in.

As it is now, I prefer having my Linux and BSD systems completely separate from my Windows systems, and I don't really need everything to be the same as long as filesystem support is there and the systems can at least talk to each other.


You could take a miniport driver from NT 3.5 and install it on NT4 and Windows 2000 too.  The layers above and below the driver would recognise that the driver didn't import/export certain hooks/functions and simply not try to use them.  Cutler really did his homework when the basic framework of NT was penned.

