
hahhaa doom go brrrr

The date was June 10, 2018. The sun was shining, the grass was growing, and the birds were singing. At least, that's what I assumed. Being a video game and tech obsessed teenager, I was indoors, my eyes glued to my computer monitor like a starving lion spying on a plump gazelle. I was watching the E3 (Electronic Entertainment Expo) 2018 broadcast on twitch.tv, a popular streaming website. Video game developers use E3 as an annual opportunity to showcase their upcoming projects to the public. So far, the showing had been disappointing. Much to my dismay, multiple game developers had failed to unveil anything of actual substance for an entire two hours. A graphical update here, a bug fix there. Issues that should have been fixed at every game's initial launch, not a few months after release. Feeling hopeless, I averted my eyes from my computer monitor to check Reddit (a social media app/website) for any forum posts that I had yet to see. But then, I heard it. The sound of music composer Mick Gordon's take on the original "DooM" theme, that awesome combination of metal and electronic music. I looked up at my screen and gasped. Bethesda Softworks and id Software had just announced "DOOM: Eternal", the fifth installment in the "DooM" video game series. "DOOM: Eternal" creative director Hugo Martin promised that the game would feel more powerful than its 2016 predecessor, that there would be twice as many enemy types, and that the Doom community would finally get to see "hell on earth". (Martin) As a fan of "DOOM (2016)", I was ecstatic. The original "DooM" popularized the "First Person Shooter (FPS)" genre, and I wished I didn't have to wait to experience the most recent entry in the series. "DOOM (1993)" was a graphical landmark when it was originally released, yet nowadays it looks extremely dated, especially compared to "DOOM: Eternal". What advancements in computer technology drove this graphical change? Computers became faster, digital storage increased, and computer peripherals became able to display higher resolutions and refresh rates.
"DooM" (1993) graphics example: [image] (Doom | Doom Wiki)
"DOOM: Eternal" graphics example: [image] (Bailey)
In their video "Evolution Of DOOM", the video game YouTube channel "gameranx" says that on December 10, 1993, a file titled "DOOM1_0.zip" was uploaded to the File Transfer Protocol (FTP) server of the University of Wisconsin. This file, two megabytes in size, contained the video game "DooM" created by the game development group "id Software". (Evolution of DOOM) While not the first game in the "First Person Shooter" (FPS) genre, "DooM" popularized the genre, to the point of any other FPS game being referred to as a "Doom Clone" until the late 1990s. (Doom clones | Doom Wiki) The graphics of the original "DooM" are definitely a major downgrade compared to today's graphical standards, but keep in mind that the minimum system requirements of "DooM", according to the article "Doom System Requirements" on gamesystemrequirements.com, were eight megabytes of RAM, an Intel Pentium or AMD (Advanced Micro Devices) Athlon 486 processor cycling at sixty-six megahertz or more, and an operating system that was Windows 95 or above. (Doom System Requirements) In case you don't speak the language of technology (although I hope you learn a thing or two by the end of this essay), that speed and storage capacity are laughable compared to the specifications of today.
By 1993, the microprocessor, or CPU (Central Processing Unit), had been around for twenty-two years, having replaced the integrated circuit in 1971 thanks to Intel, the CPU manufacturer founded by Robert Noyce and Gordon Moore. Gordon Moore also created "Moore's law", which states "The number of transistors incorporated in a chip will approximately double every 24 months". (Moore) Sadly, according to writer and computer builder Steve Blank in his article "The End of More - The Death of Moore's Law", this law would end around 2005, thanks to the basic laws of physics. (Blank) 1993 also marked an important milestone for Intel, which had just released the first "Pentium" processor, capable of a base clock of 60 MHz (megahertz). The term "base clock" refers to the default speed of a CPU; this speed can be adjusted to the user's specifications. "MHz" refers to one million cycles per second, and a cycle is essentially one or more problems that the computer solves, so the more cycles a CPU runs per second, the more problems get solved. Intel would continue upgrading its "Pentium" lineup until January 4, 2000, when it released the "Celeron" processor, with a base clock of 533 MHz. Soon after, on June 19, 2000, rival CPU company AMD released its "Duron" processor, which had a base clock of 600 MHz and a maximum clock of 1.8 GHz (gigahertz). One GHz is equal to 1,000 MHz. Intel and AMD had established themselves as the two major CPU companies in 1970s Silicon Valley, and both companies have been bitter rivals since then, trading figurative blows in the form of competitive releases, discounts, and one-upmanship to this day. Moving on to April 21, 2005, when AMD released the first dual-core CPU, the "Athlon 64 X2 3800+". The notable feature of this CPU, besides a 2.0 GHz base clock and a 3.8 maximum clock, was that it was the first CPU to have two cores. A CPU core is an individual processor within the CPU; the more cores a CPU has, the more tasks it can perform per cycle, thus maximizing its efficiency. Intel wouldn't respond until January 9, 2006, when it released its dual-core processor, the "Core 2 Duo Processor E6320", with a base clock of 1.86 GHz. (Computer Processor History)
According to tech entrepreneur Linus Sebastian in his YouTube videos "10 Years of Gaming PCs: 2009 - 2014 (Part 1)" and "10 Years of Gaming PCs: 2015 - 2019 (Part 2)", AMD would have the upper hand over Intel until 2011, when Intel released the "Sandy Bridge" CPU microarchitecture, which was faster than and around the same price as AMD's competing products at the time. (Sebastian) The article "What is Microarchitecture?" on the website Computer Hope defines microarchitecture as "a hardware implementation of an ISA (instruction set architecture). An ISA is a structure of commands and operations used by software to communicate with hardware. A microarchitecture is the hardware circuitry that implements one particular ISA". (What is Microarchitecture?) Microarchitecture is also referred to as the generation a CPU belongs to. Intel would continue to dominate the high-end CPU market until 2019, when AMD would "dethrone" Intel with its third-generation "Ryzen" CPU lineup, the most notable of which was the "Ryzen 9 3950X", with a total of sixteen cores, thirty-two threads, a base clock of 3.5 GHz, and a maximum clock of 4.7 GHz. (Sebastian) The term "thread" refers to splitting one core into virtual cores via a process known as "simultaneous multithreading", which allows one core to perform two tasks at once. What CPU your computer has is extremely influential in how fast your computer can run, but for video games and other types of graphics, there is a special type of processor designed specifically for the task of "rendering" (displaying) and generating graphics. This processor is known as the graphics processing unit, or "GPU". The term "GPU" wasn't used until around 1999, when video cards started to evolve beyond generating two-dimensional graphics and into generating three-dimensional graphics. According to user "Olena" in their article "A Brief History of GPU", the first GPU was the "GeForce 256", created by GPU company "Nvidia" in 1999. Nvidia promoted the GeForce 256 as "A single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second". (Olena) Unlike the evolution of CPUs, the history of GPUs is more one-sided, with AMD playing a game of "catch-up" ever since Nvidia overtook it in the high-end GPU market in 2013. (Sebastian) Fun fact: GPUs aren't used only for gaming! In 2010, Nvidia collaborated with Audi to power the dashboards and improve the entertainment and navigation systems in Audi's cars! (Olena) Much to my (and many other tech enthusiasts') dismay, GPUs would increase dramatically in price thanks to the "bitcoin mania" around 2017. This amounted, according to senior editor Tom Warren in his article "Bitcoin Mania is Hurting PC Gamers By Pushing Up GPU Prices" on theverge.com, to around an 80% increase in price for the same GPU due to stock shortages. (Warren) Just for context, Nvidia's "flagship" GPU in 2017 was the 1080 Ti, the finest card of the "Pascal" microarchitecture. Fun fact: I have this card. The 1080 Ti launched for $699, with the specifications of a base clock of 1,481 MHz, a maximum clock of 1,582 MHz, and 11 gigabytes of GDDR5X VRAM (memory that is exclusive to the GPU), according to the box it came in. Compare this to Nvidia's most recent flagship GPU, the 2080 Ti of Nvidia's follow-up "Turing" microarchitecture, another card I have. This GPU launched in 2019 for $1,199.
The 2080 Ti's specifications, according to the box it came in, included a base clock of 1,350 MHz, a maximum clock of 1,545 MHz, and 11 gigabytes of GDDR6 VRAM.
A major reason why "DooM" was so popular and genius was how id Software developer John Carmack managed to "fake" the three-dimensional graphics without taking up too much processing power, hard drive space, or "RAM" (random access memory), a specific type of digital storage. According to the article "RAM (Random Access Memory) Definition" on the website TechTerms, RAM is known as "volatile" memory because, unlike normal storage (which at the time took the form of hard drive space), it only holds data while the computer is turned on; it is also much faster than normal storage. A commonly used analogy is that RAM is the computer's short-term memory, storing temporary files to be used by programs, while hard drive storage is the computer's long-term memory. (RAM (Random Access Memory) Definition) As I stated earlier, in 1993, "DooM" required 8 megabytes of RAM to run. For some context, as of 2020, "DOOM: Eternal" requires a minimum of 8 gigabytes of DDR4 (more on this later) RAM to run, with most gaming machines possessing 16 gigabytes of DDR4 RAM. According to tech journalist Scott Thornton in his article "What is DDR (Double Data Rate) Memory and SDRAM Memory", in 1993 the popular format of RAM was "SDRAM", which stands for "Synchronous Dynamic Random Access Memory". SDRAM differs from its predecessor, "DRAM" (Dynamic Random Access Memory), by being synchronized with the clock speed of the CPU. DRAM was asynchronous (not synchronized by any external influence), which "posed a problem in organizing data as it comes in so it can be queued for the process it's associated with". SDRAM was able to transfer data one time per clock cycle, and its replacement in the early 2000s, "DDR SDRAM" (Double Data Rate Synchronous Dynamic Random Access Memory), was able to transfer data two times per clock cycle. This evolution of RAM continues to this day. In 2003, DDR2 SDRAM was released, able to transfer four pieces of data per clock cycle. In 2007, DDR3 SDRAM was able to transfer eight pieces of data per clock cycle. In 2014, DDR4 SDRAM was still able to transfer eight pieces of data per cycle, but the clock speed had increased by 600 MHz, and the operating voltage had been reduced from 3.3 volts for the original SDRAM to 1.2 volts for DDR4. (Thornton) The capacity of each "RAM stick" (a physical stick of RAM that you insert into your computer) had also increased, from around two megabytes per stick to up to 128 gigabytes per stick in 2020 (although that particular option will cost you around $1,000 per stick depending on the manufacturer), with the average stick size being 8 gigabytes. The average computer nowadays can hold up to four RAM sticks, although more high-end systems can hold up to sixteen or even thirty-two! Rewind to 1993, when the original "DooM" took up two megabytes of storage, not to be confused with RAM. According to tech enthusiast Rex Farrance in their article "Timeline: 50 Years of Hard Drives", the average computer at this time had around two gigabytes of storage. Storage took the form of magneto-optical discs, a combination of the earlier magnetic discs and optical discs. (Farrance) This format of storage is still in use today, although mainly for large amounts of rarely used data, while data that is commonly used by programs (including the operating system) is put on solid-state drives, or SSDs.
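To put those transfer rates into perspective, here is a rough back-of-the-envelope sketch in Python. The speed grades below (PC100, DDR-400, DDR2-800, DDR3-1600, DDR4-3200) are just examples chosen for illustration, not the only speeds each generation shipped at, and the math assumes a single standard 64-bit memory module.

    # Rough peak bandwidth of one 64-bit (8-byte wide) memory module:
    # transfers per second = memory cell clock x data transfers per clock,
    # which is the "pieces of data per clock cycle" idea described above.
    MODULE_WIDTH_BYTES = 8

    generations = [
        # (name, memory cell clock in MHz, transfers per clock)
        ("SDRAM PC100", 100, 1),
        ("DDR-400",     200, 2),
        ("DDR2-800",    200, 4),
        ("DDR3-1600",   200, 8),
        ("DDR4-3200",   400, 8),
    ]

    for name, clock_mhz, transfers in generations:
        mt_per_s = clock_mhz * transfers                   # million transfers per second
        gb_per_s = mt_per_s * MODULE_WIDTH_BYTES / 1000.0  # peak gigabytes per second
        print(f"{name:12s} {mt_per_s:5d} MT/s  ~{gb_per_s:4.1f} GB/s peak")

Under these assumptions, peak bandwidth goes from roughly 0.8 GB/s for PC100 SDRAM to about 25.6 GB/s for DDR4-3200, which is the kind of jump the paragraph above describes.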
According to tech journalist Keith Foote in their article "A Brief History of Data Storage", SSDs differed from HDDs by being much faster and smaller, storing data on a flash memory chip, not unlike a USB thumb drive. While SSDs had been used as far back as 1950, they wouldn't find their way into the average gaming machine until the early 2010s. (Foote) One way to think about an SSD is as your common knowledge: it doesn't contain every piece of information you know, just what you use on a daily basis. For example, my computer has around 750 gigabytes of storage in SSDs and around two terabytes of internal HDD storage. On my SSDs, I have my operating system, my favorite programs and games, and any files that I use frequently. On my HDD, I have everything else that I don't use on a regular basis.
"DOOM: Eternal" would release on March 20, 2020, four months after its original release date of November 22, 2019. And let me tell you, I was excited. The second my clock turned from 11:59 P.M. to 12:00 A.M., I repeatedly clicked my refresh button, desperately waiting to see the words "Coming March 20" transform into the ever so beautiful and elegant phrase: "Download Now". At this point in time, I had a monitor capable of displaying roughly two million pixels spread out over its 27-inch display panel, at a rate of 240 times a second. Speaking of monitors and displays, according to the article "The Evolution of the Monitor" on the website PCR, at the time of the original "DooM" release, the average monitor was either a CRT (cathode ray tube) monitor or the newer (and more expensive) LCD (liquid crystal display) monitor. The CRT was first unveiled in 1897 by the German physicist Karl Ferdinand Braun. CRT monitors functioned by having colored cathode ray tubes generate an image on a phosphorescent screen. These monitors would have an average resolution of 800 by 600 pixels and a refresh rate of around 30 frames per second. CRT monitors would eventually be replaced by LCD monitors in the late 2000s. LCD monitors function by using two pieces of polarized glass with liquid crystal between them. A backlight shines through the first piece of polarized glass (also known as a substrate), and electrical currents then cause the liquid crystals to adjust how much light passes through to the second substrate, which creates the images that are displayed. (The Evolution of the Monitor) The average resolution would increase to 1920x1080 pixels and the refresh rate would increase to 60 frames a second around 2010. Nowadays, there are high-end monitors capable of displaying up to 7,680 by 4,320 pixels, and monitors capable of displaying up to 360 frames per second, assuming you have around $1,000 lying around.
At long last, it had finished. My 40.02 gigabyte download of "DOOM: Eternal" had finally completed, and oh boy, I was ready to experience this. I ran over to my computer, my beautiful creation sporting 32 gigabytes of DDR4 RAM, an AMD Ryzen 7 3800X with a base clock of 3.8 GHz, an Nvidia 2080 Ti, 750 gigabytes of SSD storage, and two terabytes of HDD storage. Finally, after two years of waiting, I grabbed my mouse and moved my cursor over that gorgeous button titled "Launch DOOM: Eternal". Thanks to multiple advancements in the speed of CPUs, the size of RAM and storage, and display resolution and refresh rate, "DooM" had evolved from an archaic, pixelated video game in 1993 into the beautiful, realistic, and smooth video game it is today. And personally, I can't wait to see what the future has in store for us.
submitted by Voxel225 to voxelists

Technical Cryptonight Discussion: What about low-latency RAM (RLDRAM 3, QDR-IV, or HMC) + ASICs?

The Cryptonight algorithm is described as ASIC resistant, in particular because of one feature:
A megabyte of internal memory is almost unacceptable for the modern ASICs. 
EDIT: Each instance of Cryptonight requires 2MB of RAM. Therefore, any Cryptonight multi-processor is required to have 2MB per instance. Since CPUs are incredibly well stocked with cache (i.e., 32MB L3 on Threadripper, 16MB L3 on Ryzen, and plenty of L2+L3 on Skylake Servers), it seems unlikely that ASICs would be able to compete well vs CPUs.
In fact, a large number of people seem to be incredibly confident in Cryptonight's ASIC resistance. And indeed, anyone who knows how standard DDR4 works knows that DDR4 is unacceptable for Cryptonight. GDDR5 similarly doesn't look like a very good technology for Cryptonight, since it focuses on high bandwidth instead of low latency.
Which suggests only RAM built into an ASIC would be able to handle the 2MB that Cryptonight uses. Solid argument, but it seems to be missing a critical point of analysis in my eyes.
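As a quick sanity check on the cache argument, here's a minimal sketch dividing the L3 sizes above by Cryptonight's 2MB scratchpad. Treating all of L3 as usable scratchpad space is an optimistic simplification.

    # How many 2MB Cryptonight scratchpads fit entirely in L3 cache?
    SCRATCHPAD_MB = 2

    l3_sizes_mb = {
        "Threadripper (32MB L3)": 32,
        "Ryzen (16MB L3)": 16,
    }

    for cpu, l3_mb in l3_sizes_mb.items():
        print(f"{cpu}: ~{l3_mb // SCRATCHPAD_MB} concurrent hashes fit in cache")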
What about "exotic" RAM, like RLDRAM3 ?? Or even QDR-IV?

QDR-IV SRAM

QDR-IV SRAM is absurdly expensive. However, it's a good example of "exotic RAM" that is available on the marketplace, and I'm focusing on it because QDR-IV is really simple to describe.
QDR-IV costs roughly $290 for 16Mbit x 18 bits. It is true static RAM. The 18 bits are two bytes of 8 bits each plus a parity bit per byte, because QDR-IV is usually designed for high-speed routers.
QDR-IV has none of the speed or latency issues of DDR4 RAM. There are no "banks", there are no "refreshes", there is no "obliterate the data as you load it into the sense amplifiers", and there's no "auto-precharge" as you load the data from the sense amps back into the capacitors.
Anything that could have caused latency issues is gone. QDR-IV is about as fast as you can get latency-wise. Every clock cycle, you specify an address, and QDR-IV will generate a response every clock cycle. In fact, QDR means "quad data rate": the SRAM performs 2 reads and 2 writes per clock cycle. There is a slight amount of latency: 8 clock cycles for reads (7.5 nanoseconds) and 5 clock cycles for writes (4.6 nanoseconds). For those keeping track at home: AMD Zen's L3 cache has a latency of 40 clocks, aka 10 nanoseconds at 4GHz.
Basically, QDR-IV BEATS the L3 latency of modern CPUs. And we haven't even begun to talk about software or ASIC optimizations yet.
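Spelled out, those latency numbers are just cycle counts divided by clock speed. The ~1066 MHz QDR-IV clock below is an assumption inferred from the "8 cycles ≈ 7.5 ns" figure above, not a quoted spec.

    # Convert cycle counts into wall-clock latency: cycles / (cycles per nanosecond).
    def latency_ns(cycles, clock_ghz):
        return cycles / clock_ghz

    print(f"QDR-IV read : {latency_ns(8, 1.066):.1f} ns")   # ~7.5 ns
    print(f"QDR-IV write: {latency_ns(5, 1.066):.1f} ns")   # ~4.7 ns (the post quotes 4.6 ns)
    print(f"Zen L3 hit  : {latency_ns(40, 4.0):.1f} ns")    # 10 ns at 4 GHz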

CPU inefficiencies for Cryptonight

Now, if that weren't bad enough... CPUs have a few problems with the Cryptonight algorithm.
  1. AMD Zen and Intel Skylake CPUs transfer from L3 -> L2 -> L1 cache. Each of these transfers is in 64-byte chunks, but Cryptonight only uses 16 of those bytes. This means that 75% of L3 cache bandwidth is wasted on 48 bytes that will never be used per inner loop of Cryptonight. An ASIC would transfer only 16 bytes at a time, instantly increasing the effective RAM speed by 4-fold (see the sketch after this list).
  2. AES-NI instructions on Ryzen / Threadripper can only be done one per core. This means a 16-core Threadripper can perform at most 16 AES encryptions per clock tick. An ASIC can perform as many as you'd like, up to the speed of the RAM.
  3. CPUs waste a ton of energy: there are L1 and L2 caches which do NOTHING in Cryptonight, plus floating-point units, memory controllers, and more. An ASIC which strips things out to only the bare necessities (basically: AES for the Cryptonight core) would be way more power efficient, even at ancient 65nm or 90nm designs.
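To put point #1 in numbers, here's a tiny sketch. The L3 bandwidth figure is a made-up round number purely for illustration; the 64-byte vs. 16-byte split is the actual point.

    # Useful vs. wasted bandwidth when every transfer is a 64-byte cache line
    # but the Cryptonight inner loop only touches 16 bytes of it.
    CACHE_LINE_BYTES = 64
    USEFUL_BYTES     = 16
    ASSUMED_L3_GBPS  = 200    # illustrative figure, not a measurement

    useful = USEFUL_BYTES / CACHE_LINE_BYTES
    print(f"Useful fraction of each cache-line transfer: {useful:.0%}")           # 25%
    print(f"Useful bandwidth on the CPU: ~{ASSUMED_L3_GBPS * useful:.0f} GB/s")
    print(f"Useful bandwidth with 16-byte transfers (ASIC): ~{ASSUMED_L3_GBPS:.0f} GB/s")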

Ideal RAM access pattern

For all y'all who are used to DDR4, here's a special trick with QDR-IV or RLDRAM: you can pipeline accesses. What does this mean?
First, it should be noted that Cryptonight's RAM access pattern alternates dependent reads and writes to the scratchpad: each write depends on the read before it, so a single instance can't issue its next access until the previous one completes.
QDR-IV and RLDRAM3 still have latency involved. Assuming 8 clocks of latency, the naive access pattern would be:
  1. Read
  2. Stall
  3. Stall
  4. Stall
  5. Stall
  6. Stall
  7. Stall
  8. Stall
  9. Stall
  10. Write
  11. Stall
  12. Stall
  13. Stall
  14. Stall
  15. Stall
  16. Stall
  17. Stall
  18. Stall
  19. Read #2
  20. Stall
  21. Stall
  22. Stall
  23. Stall
  24. Stall
  25. Stall
  26. Stall
  27. Stall
  28. Write #2
  29. Stall
  30. Stall
  31. Stall
  32. Stall
  33. Stall
  34. Stall
  35. Stall
  36. Stall
This isn't very efficient: the RAM sits around waiting. Even with "latency reduced" RAM, you can see that the RAM still isn't doing very much. In fact, this is why people thought Cryptonight was safe against ASICs.
But what if we instead ran four instances in parallel? That way, there is always data flowing.
  1. Cryptonight #1 Read
  2. Cryptonight #2 Read
  3. Cryptonight #3 Read
  4. Cryptonight #4 Read
  5. Stall
  6. Stall
  7. Stall
  8. Stall
  9. Stall
  10. Cryptonight #1 Write
  11. Cryptonight #2 Write
  12. Cryptonight #3 Write
  13. Cryptonight #4 Write
  14. Stall
  15. Stall
  16. Stall
  17. Stall
  18. Stall
  19. Cryptonight #1 Read #2
  20. Cryptonight #2 Read #2
  21. Cryptonight #3 Read #2
  22. Cryptonight #4 Read #2
  23. Stall
  24. Stall
  25. Stall
  26. Stall
  27. Stall
  28. Cryptonight #1 Write #2
  29. Cryptonight #2 Write #2
  30. Cryptonight #3 Write #2
  31. Cryptonight #4 Write #2
  32. Stall
  33. Stall
  34. Stall
  35. Stall
  36. Stall
Notice: we're doing 4x the Cryptonight in the same amount of time. Now imagine if the stalls were COMPLETELY gone. DDR4 CANNOT do this. And that's why most people thought ASICs were impossible for Cryptonight.
Unfortunately, RLDRAM3 and QDR-IV can accomplish this kind of pipelining. In fact, that's what they were designed for.
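Here's a toy model of why the interleaving above helps, assuming each instance can issue one access every latency+1 cycles and the shared memory bus accepts at most one access per cycle. It's a sketch of the scheduling idea, not a cycle-accurate RLDRAM/QDR model.

    # Toy pipelining model: with N independent Cryptonight instances, the memory
    # bus can be kept busy during the cycles any single instance spends waiting.
    LATENCY = 8   # cycles between issuing an access and being able to issue the next

    def bus_utilization(instances, cycles=1000):
        busy = 0
        next_ready = [0] * instances           # cycle at which each instance may issue again
        for cycle in range(cycles):
            for i in range(instances):
                if next_ready[i] <= cycle:
                    busy += 1                   # one access goes out on the bus this cycle
                    next_ready[i] = cycle + LATENCY + 1
                    break                       # the bus only accepts one access per cycle
        return busy / cycles

    for n in (1, 4, 9):
        print(f"{n} instance(s): bus busy {bus_utilization(n):.0%} of cycles")
    # 1 instance ~11%, 4 instances ~44%, 9 instances ~100% under these assumptions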

RLDRAM3

As good as QDR-IV RAM is, it's way too expensive. RLDRAM3 is almost as fast, but is way more complicated to use and describe. Due to the lower cost of RLDRAM3, however, I'd assume any ASIC for CryptoNight would use RLDRAM3 instead of the simpler QDR-IV. RLDRAM3 32Mbit x36 bits costs $180 at quantity one, and would support up to 64 parallel Cryptonight instances (in contrast, an $800 AMD 1950X Threadripper supports 16 at best).
Such a design would basically operate at the maximum speed of RLDRAM3. In the case of a 36-bit bus at 2133 MT/s, we're talking about 2,133,000,000 transfers / (burst length 4 x 4 reads/writes per inner loop x 524,288 inner-loop iterations) ≈ 254 full Cryptonight hashes per second.
254 hashes per second sounds low, and it is. But we're talking about literally a two-chip design here: one chip for RAM, one chip for the ASIC/AES logic. Such a design would consume no more than 5 watts.
If you were to replicate the ~5W design 60 times, you'd get about 15,240 hashes/second at 300 watts.
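The arithmetic above, written out as a sketch (the same formula also reproduces the RLDRAM2 figure in the next section):

    # Memory-bound throughput estimate: per inner-loop iteration there are
    # 4 accesses (reads + writes) of burst length 4 on the 36-bit bus,
    # and a full Cryptonight hash runs 524,288 iterations.
    TRANSFERS_PER_HASH = 524_288 * 4 * 4

    def hashes_per_second(transfers_per_second):
        return transfers_per_second / TRANSFERS_PER_HASH

    rldram3 = hashes_per_second(2133e6)      # 2133 MT/s RLDRAM3
    rldram2 = hashes_per_second(800e6)       # 800 MT/s RLDRAM2 (next section)
    # the post rounds 254 x 60 and 95 x 60 to 15,240 and 5,700
    print(f"RLDRAM3: ~{rldram3:.0f} H/s per board, ~{rldram3 * 60:.0f} H/s for 60 boards")
    print(f"RLDRAM2: ~{rldram2:.0f} H/s per board, ~{rldram2 * 60:.0f} H/s for 60 boards")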

RLDRAM2

Depending on cost calculations, going cheaper and "making more" might be a better idea. RLDRAM2 is widely available at only $32 per chip at 800 MT/s.
Such a design would theoretically support 800,000,000 / (4 x 4 x 524,288) ≈ 95 Cryptonight hashes per second.
The scary part: the RLDRAM2 chip there only uses 1W of power. Together, you get 5 watts again as a reasonable power estimate. x60 would be about 5,700 hashes/second at 300 watts.
Here's Micron's whitepaper on RLDRAM2: https://www.micron.com/~/media/documents/products/technical-note/dram/tn4902.pdf . RLDRAM3 is the same but denser, faster, and more power efficient.

Hybrid Memory Cube

Hybrid Memory Cube (HMC) is "stacked RAM" designed for low latency. As far as I can tell, Hybrid Memory Cube allows an insane amount of parallelism and pipelining. It'd be the future of an ASIC Cryptonight design. The existence of Hybrid Memory Cube is more about a "Generation 2" design or later; in effect, it demonstrates that future designs can be lower power and higher speed.

Realistic ASIC Sketch: RLDRAM3 + Parallel Processing

The overall board design would center on the ASIC, a simple pipelined AES ASIC that talks with RLDRAM3 ($180) or RLDRAM2 (~$30).
It's hard for me to estimate an ASIC's cost without the right tools or design. But a multi-project wafer service like MOSIS offers "cheap" access to 14nm and 22nm nodes. Rumor is that this is roughly $100k per run for ~40 dies, suitable for research and development. Mass production would require further investment, but mass production at the ~65nm node is rumored to cost in the single-digit millions of dollars, or maybe even just six figures or so.
So realistically speaking: it'd take a ~$10 million investment plus a talented engineer (or team of engineers) familiar with RLDRAM3, PCIe 3.0, ASIC design, AES, and Cryptonight to build an ASIC.

TL;DR:

submitted by dragontamer5788 to Monero

[META] New to PC Building? - September 2018 Edition

Intro

You've heard from all your gaming friends/family or co-workers that custom PCs are the way to go. Or maybe you've been fed up with your HP, Dell, Acer, Gateway, Lenovo, etc. pre-builts or Macs and want some more quality and value in your next PC purchase. Or maybe you haven't built a PC in a long time and want to get back into the game. Well, here's a good place to start.

Instructions

  1. Make a budget for your PC (e.g., $800, $1000, $1250, $1500, etc.).
  2. Decide what you will use your PC for.
    • For gaming, decide what games and at what resolution and FPS you want to play at.
    • For productivity, decide what software you'll need and find the recommended specs to use those apps.
    • For a bit of both, your PC build should be built on the HIGHEST specs recommended for your applications (e.g., if you only play FortNite and need CPU power for CFD simulations, use specs recommended for CFD).
    Here are some rough estimates for builds with entirely NEW parts:
    1080p 60FPS ultra-settings modern AAA gaming: ~$1,200
    1440p 60FPS high/ultra-settings modern AAA gaming: ~$1,600
    1080p 144FPS ultra-settings modern AAA gaming: $2,000
    4K 50FPS medium/high-settings modern AAA gaming: > $2,400
    It's noted that some compromises (e.g., lower settings and/or resolution) can be made to achieve the same or slightly lower gaming experience within ±15% of the above prices. It's also noted that you can still get higher FPS on older or used PCs by lowering settings and/or resolution AND/OR buying new/used parts to upgrade your system. Make a new topic about it if you're interested.
    Also note that AAA gaming is different from e-sport games like CSGO, DOTA2, FortNite, HOTS, LoL, Overwatch, R6S, etc. Those games have lower requirements and can make do with smaller budgets.
  3. Revise your budget AND/OR resolution and FPS until both are compatible. Compare this to the recommended requirements of the most demanding game on your list. For older games, you might be able to lower your budget. For others, you might have to increase your budget.
    It helps to watch gaming benchmarks on Youtube. A good example of what you're looking for is something like this (https://www.youtube.com/watch?v=9eLxSOoSdjY). Take note of the resolution, settings, FPS, and the specs in the video title/description; ask yourself if the better gaming experience is worth increasing your budget OR if you're okay with lower settings and lowering your budget. Note that you won't be able to see FPS higher than 60FPS for Youtube videos; something like this would have to be seen in-person at a computer shop.
  4. Make a build on https://ca.pcpartpicker.com/. If you still have no idea how to put together parts, start here (http://www.logicalincrements.com/) to get an understanding of PC part tiers. If you want more info about part explanations and brief buying tips, see the next section below.
  5. Click on the Reddit logo button next to Markup, copy and paste the generated text (in markup mode if using new Reddit), and share your build for review!
  6. Consider which retailer to buy your parts from. Here's a table comparing different retailers: https://docs.google.com/spreadsheets/d/1L8uijxuoJH4mjKCjwkJbCrKprCiU8CtM15mvOXxzV1s/edit?usp=sharing
  7. Buy your parts! Use PCPP above to send you e-mail alerts on price drops or subscribe to /bapcsalescanada for deals.
    You can get parts from the following PC retailers in alphabetical order:
  8. After procuring your parts, it's time to build. Use a good Youtube tutorial like this (https://www.youtube.com/watch?v=IhX0fOUYd8Q) that teaches BAPC fundamentals, but always refer to your product manuals or other Youtube tutorials for part-specific instructions like CPU mounting, radiator mounting, CMOS resetting, etc. If everything still seems overwhelming, you can always pay a computer shop or a friend/family member to build it for you.
    It might also be smart to look up some first-time building mistakes to avoid:
  9. Share your experience with us.
  10. If you have any other questions, use the search bar first. If it's not there, make a topic.

BAPC News (Last Updated - 2018/09/20)

CPU

https://www.tomshardware.com/news/intel-9000-series-cpu-faq,37743.html
Intel 9000 CPUs (Coffee Lake Refresh) will be coming out in Q4. With the exception of the i9 (8-core, 16-thread) flagship CPUs, the i3, i5, and i7 lineups are almost identical to their Intel 8000 (Coffee Lake) counterparts, just clocked slightly faster. If you are wondering whether you should upgrade to the newer CPU on the same tier (e.g., i5-8400 to i5-9400), I don't recommend that you do, as you will only see marginal performance increases.

Mobo

https://www.anandtech.com/show/13135/more-details-on-intels-z390-chipset-exposed
Z370 boards will now be phased out for Z390 boards, which will natively support Intel 9000 CPUs (preferably the i5-9600K, i7-9700K, and i9-9900K).

GPU

https://www.youtube.com/watch?v=WDrpsv0QIR0
RTX 2080 and 2080 Ti benchmarks are out; they provide ~10 and ~20 more frames than the 1080 Ti respectively and also feature ray tracing (superior lighting and shadow effects), which is featured in only ~30 games so far (i.e., not widely supported); effectively, they provide +25% more performance for +70% increased cost. My recommendation is NOT to buy them unless you need one for work or have lots of disposable income. The GTX 1000 Pascal series is still relevant in today's gaming specs.

Part Explanations

CPU

The calculator part. More GHz is analogous to fast fingers number crunching in the calculator. More cores is analogous to having more calculators. More threads is analogous to having more filing clerks piling more work onto the calculators. Microarchitecture (core design) is analogous to how the internal circuit inside the calculator is designed (e.g., AMD FX series CPUs are slower than their Intel equivalents even with higher OC'd GHz speeds because the core design is subpar). All of these are important in determining CPU speed.
In general, higher GHz is more important for gaming now whereas # cores and threads are more important for multitasking like streaming, video editing, and advanced scientific/engineering computations. Core designs from both AMD and Intel in their most recent products are very good now, but something to keep in mind.

Overclocking

The basic concept of overclocking (OCing) is to feed your CPU more power through higher voltage and run it at a higher clock speed so it does calculations faster. Whether your parts are good overclockers depends on the manufacturing of your specific part; slight variations in materials and manufacturing will result in different overclocking capability ("silicon lottery"). The downsides are that you can void your warranties, that the excess heat produced will decrease the lifespan of your parts, and that there is a trial-and-error process to finding OC settings that are stable. Unstable OC settings result in computer freezes or random shut-offs from excess heat. OCing will give you extra performance, often for free or by investing in a CPU cooler to control your temperatures so that the excess heat will not decrease your parts' lifespans as much. If you don't know how to OC, don't do it.

Current Products

Intel CPUs have higher GHz than AMD CPUs, which makes them better for gaming purposes. However, AMD Ryzen CPUs have more cores and threads than their Intel equivalents. The new parts are the AMD Ryzen 3, 5, or 7 2000 series and the Intel i3, i5, or i7 8000 series (Coffee Lake). Everything else is outdated.
If you want to overclock on an AMD system, know that you can get a moderate OC on a B350/B450 with all CPUs. X370/X470 mobos usually come with better VRMs meant for OCing the 2600X, 2700, and 2700X. If you don't know how to OC, know that the -X AMD CPUs have the ability to OC themselves automatically without manual settings. For Intel systems, you cannot OC unless the CPU is an unlocked -K chip (e.g., i3-8350K, i5-8600K, i7-8700K, etc.) AND the motherboard is a Z370 mobo. In general, it is not worth getting a Z370 mobo UNLESS you are getting an i5-8600K or i7-8700K.

CPU and Mobo Compatibility

Note about Ryzen 2000 CPUs on B350 mobos: yes, you CAN pair them up since they use the same socket. You might get an error message on PCPP that says that they might not be compatible. Call the retailer and ask if the mobo you're planning on buying has a "Ryzen 2000 Series Ready" sticker on the box. This SHOULD NOT be a problem with any mobos manufactured after February 2018.
Note about Intel 9000 CPUs on B360 / Z370 mobos: same as above with Ryzen 2000 CPUs on B350 or X370 boards.

CPU Cooler (Air / Liquid)

Air or liquid cooling for your CPU. This is mostly optional unless you're doing heavy OCing on AMD Ryzen CPUs and/or on Intel -K and i7-8700 CPUs.
For more information about air and liquid cooling comparisons, see here:

Motherboard/mobo

Part that lets all the parts talk to each other. Comes in different sizes from small to big: mITX, mATX, ATX, and eATX. For most people, mATX is cost-effective and does the job perfectly. If you need more features like extra USB slots, go for an ATX. mITX is for those who want a really small form factor and are willing to pay a premium for it. eATX mobos are like ATX mobos except that they have more features and are bigger - meant for super PC enthusiasts who need the features.
If you are NOT OCing, pick whatever is cheap and meets your specs. I recommend ASUS or MSI because they have RMA centres in Canada in case the board breaks, whereas other brands' RMA centres are outside of Canada, like in the US. If you are OCing, then you need to look at the quality of the VRMs because those will greatly influence the stability and lifespan of your parts.

Memory/RAM

Part that keeps Windows and your software active. Currently runs on the DDR4 platform for new builds. Go for dual channel whenever possible. Here's a breakdown of how much RAM you need:
AMD Ryzen CPUs get extra FPS from faster RAM speeds (ideally 3200MHz) in gaming when paired with powerful video cards like the GTX 1070. Intel Coffee Lake CPUs support a maximum of 2667MHz RAM on B360 mobos. Higher-end Z370 mobos can support 4000 - 4333MHz RAM depending on the mobo, so make sure you shop carefully!
It's noted that RAM prices are highly inflated because of the smartphone industry and possibly artificial supply shortages. For more information: https://www.extremetech.com/computing/263031-ram-prices-roof-stuck-way

Storage

Part that store your files in the form of SSDs and HDDs.

Solid State Drives (SSDs)

SSDs are incredibly quick, but are expensive per TB; they are good for booting up Windows and for reducing loading times in gaming. For an old OEM pre-built, upgrading the PC with an SSD is the single greatest speed booster you can do for your system. For most people, you want to make sure the SSD you get is NOT DRAM-less, as these SSDs do not last as long as their DRAM counterparts (https://www.youtube.com/watch?v=ybIXsrLCgdM). It is also noted that the bigger the capacity of the SSD, the faster it is. SSDs come in four forms:
The 2.5" SATA form is cheaper, but it is the old format with speeds up to 550MB/s. M.2 SATA SSDs have the same transfer speeds as 2.5" SATA SSDs since they use the SATA interface, but connect directly to the mobo without a cable. It's better for cable management to get an M.2 SATA SSD over a 2.5" SATA III SSD. M.2 PCI-e SSDs are the newest SSD format and transfer up to 4GB/s depending on the PCI-e lanes they use (e.g., 1x, 2x, 4x, etc.). They're great for moving large files (e.g., 4K video production). For more info about U.2 drives, see this post (https://www.reddit.com/bapccanada/comments/8jxfqs/meta_new_to_pc_building_may_2018_edition/dzqj5ks/). Currently more common for enterprise builds, but could see some usage in consumer builds.
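For a rough feel of what those interface speeds mean in practice, here's a small sketch. The game size and the NVMe and HDD speeds are illustrative assumptions (the ~100MB/s HDD figure comes from the next section), and real-world results vary by drive and workload.

    # Approximate time to read a 50GB game install at ballpark sequential speeds.
    GAME_GB = 50

    speeds_mb_s = {
        '7200RPM HDD (~100 MB/s)':         100,
        '2.5"/M.2 SATA SSD (~550 MB/s)':   550,
        'M.2 PCI-e NVMe SSD (~3500 MB/s)': 3500,
    }

    for drive, mb_s in speeds_mb_s.items():
        seconds = GAME_GB * 1000 / mb_s
        print(f"{drive}: ~{seconds:.0f} s")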

Hard Disk Drives (HDDs)

HDDs are slow, with transfer speeds of ~100MB/s, but are cheap per TB compared to SSDs. We are now at SATA III speeds, which have a max theoretical transfer rate of 600MB/s. They also come in 5400RPM and 7200RPM forms. 5400RPM drives use slightly less power and are cheaper, but aren't as fast at dealing with a large number of small files as 7200RPM HDDs. When dealing with a small number of large files, they have roughly equivalent performance. It is noted that even a 10,000RPM HDD will still be slower than an average 2.5" SATA III SSD.

Others

SSHDs are hybrids of SSDs and HDDs. Although they seem like a good combination, it's much better in all cases to get a dedicated SSD and a dedicated HDD instead. This is because the $/speed is better for SSDs and the $/TB is better for HDDs. The same can be said for Intel Optane. They both have their uses, but for most users, they aren't worth it.

Overall

I recommend a 2.5" or M.2 SATA ≥ 250GB DRAM SSD and a 1TB or 2TB 7200RPM HDD configuration for most users for a balance of speed and storage capacity.

Video Card/GPU

Part that runs complex calculations in games and outputs to your monitor and is usually the most expensive part of the budget. The GPU you pick is dictated by the gaming resolution and FPS you want to play at.
In general, all video cards of the same product name have almost the same non-OC'd performance (e.g., Asus Dual-GTX1060-06G has the same performance as the EVGA 06G-P4-6163-KR SC GAMING). The different sizes and # fans DO affect GPU OCing capability, however. The most important thing here is to get an open-air video card, NOT a blower video card (https://www.youtube.com/watch?v=0domMRFG1Rw). The blower card is meant for upgrading pre-builts where case airflow is limited.
For cost-performance, go for NVIDIA GTX cards, because the cryptomining industry has inflated the prices of AMD RX cards. As of recently, Bitcoin has taken a -20% hit from January's $10,000+, but the cryptomining industry is still ongoing. Luckily, this means prices have nearly corrected themselves back to the original 2016 MSRPs.
In general:
Note that if your monitor has FreeSync technology, get an AMD card. If your monitor has G-Sync, get a NVIDIA card. Both technologies allow for smooth FPS gameplay. If you don't have either, it doesn't really matter which brand you get.
For AMD RX cards, visit https://www.pcworld.com/article/3197885/components-graphics/every-amd-radeon-rx-graphics-card-you-can-buy-for-pc-gaming.html

New NVIDIA GeForce RTX Series

The new NVIDIA RTX 2000 series has been recently announced and will be carried in stores in Q3 and Q4. Until all of the products have been fully vetted and reviewed, we cannot recommend them yet, as I cannot say if they are worth what NVIDIA has marketed them as. But they will be faster than their previous equivalents and will require more wattage to use. The 2070, 2080, and 2080 Ti will feature ray tracing, a new feature seen in modern CG movies that greatly enhances lighting and shadow effects. At this time, < 30 games will use ray tracing (https://www.pcgamer.com/21-games-will-support-nvidias-real-time-ray-tracing-here-are-demos-of-tomb-raider-and-control/). It's also noted that the 2080 Ti is the Titan XP equivalent, which is why it's so expensive. (https://www.youtube.com/watch?v=Irs8jyEmmPQ) The community's general recommendation is NOT to pre-order them until we see some reviews and benchmarks from reviewers first.
Looks like a couple of benchmarks are out. While keeping other parts equal, the following results were obtained (https://videocardz.com/77983/nvidia-geforce-rtx-2080-ti-and-rtx-2080-official-performance-unveiled). So the 2080 and 2080 Ti are better than last generation's 1080 Ti by ~10 and ~20 frames respectively.

Case

Part that houses your parts and protects them from their environment. It should often be the last part you choose, because the selection is big enough to be compatible with any build as long as the case is equal to or bigger than the mobo form factor.
Things to consider: aesthetics, case airflow, cable management, material, cooling options (radiators or # of fan spaces), # fans included, # drive bays, toolless installation, power supply shroud, GPU clearance length, window if applicable (e.g., acrylic, tempered glass), etc.
It is recommended to watch or read case reviews on Youtube to get an idea of a case's performance in your setup.

Power Supply/PSU

Part that runs your PC from the wall socket. Never go with a non-reputable/cheap brand for this part, as a low-quality unit could damage your other parts. Recommended PSU brands are Corsair, EVGA, Seasonic, and Thermaltake, generally. For a tier list, see here (https://linustechtips.com/main/topic/631048-psu-tier-list-updated/).

Wattage

Wattage depends on the video card chosen, if you plan to OC, and/or if you plan to upgrade to a more powerful PSU in the future. Here's a rule of thumb for non-OC wattages that meet NVIDIA's recommendations:
There are also PSU wattage calculators that you can use to estimate your wattage. How much wattage you use is based on your PC parts, how much OCing you're doing, your peripherals (e.g., gaming mouse and keyboard), how long you plan to leave your computer running, etc. It is noted that these calculators use conservative estimates, so use the outputted wattage as a baseline of how much you need. Here are the calculators (thanks, VitaminDeity).
Pick ONE calculator to use and use the recommended wattage, NOT the recommended product, as a baseline of what wattage you need for your build. Note that Cooler Master and Seasonic use the exact same calculator as Outervision. For more details about wattage, here are some reference videos:

Modularity

You might also see some info about modularity (non-modular, semi-modular, or fully modular). These terms describe whether the cables come permanently attached to the PSU or can be removed as you choose. Non-modular PSUs have ALL of the cable connections attached to the PSU with no option to remove unneeded cables. Semi-modular PSUs have separate cables for HDDs/SSDs and PCI-e connectors, but will have the CPU and mobo cables attached. Fully modular PSUs have all of their cables separate from each other, allowing you full control over cable management. It is noted that with decent cooling and airflow in your case, cable management has little effect on your temperatures (https://www.youtube.com/watch?v=YDCMMf-_ASE).

80+ Efficiency Ratings

As for ratings (80+, 80+ bronze, 80+ gold, 80+ platinum), these are the efficiencies of your PSU. Please see here for more information. If you look purely at electricity costs, 80+ gold PSUs will be more expensive than 80+ bronze PSUs for the average Canadian user until a break-even point of about 6 years (assuming 8 hours/day usage), but often the better performance, longer warranty periods, durable build quality, and extra features like fanless cooling are worth the extra premium. In general, the rule of thumb is 80+ bronze for entry-level office PCs and 80+ gold for mid-tier or higher gaming/workstation builds. If the price difference between an 80+ bronze PSU and an 80+ gold PSU is < 20%, get the 80+ gold PSU!
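If you want to run the bronze-vs-gold electricity math yourself, here's a back-of-the-envelope sketch. Every input below (load, efficiencies, electricity price, and price premium) is an assumption for illustration, so the break-even you get depends entirely on the numbers you plug in.

    # Bronze vs. gold PSU: yearly electricity cost and break-even on the premium.
    LOAD_WATTS    = 300     # assumed average load while the PC is on
    BRONZE_EFF    = 0.85    # assumed efficiency at that load
    GOLD_EFF      = 0.90
    HOURS_PER_DAY = 8
    PRICE_PER_KWH = 0.13    # CAD, illustrative
    GOLD_PREMIUM  = 40.00   # CAD extra for the gold unit, illustrative

    def yearly_cost(efficiency):
        wall_watts = LOAD_WATTS / efficiency                   # power drawn at the wall
        kwh_per_year = wall_watts * HOURS_PER_DAY * 365 / 1000
        return kwh_per_year * PRICE_PER_KWH

    savings = yearly_cost(BRONZE_EFF) - yearly_cost(GOLD_EFF)
    print(f"Gold saves ~${savings:.2f} per year in electricity")
    print(f"Break-even on a ${GOLD_PREMIUM:.0f} premium: ~{GOLD_PREMIUM / savings:.1f} years")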

Warranties

Warranties should also be looked at when shopping for PSUs. In general, PSUs with longer warranties also have better build quality. In general, for 80+ bronze and gold PSU units from reputable brands:
Any discrepancies are based on varied wattages (i.e., higher wattages have longer warranties) or updated warranty periods. Please refer to the specific product's warranty page for the correct information. For EVGA PSUs, see here (https://www.evga.com/support/warranty/power-supplies/). For Seasonic PSUs, see here (https://seasonic.com/support#period). For Corsair PSUs, see here (https://www.corsair.com/ca/en/warranty).
For all other PSU inquiries, look up the following review sites for the PSUs you're interested in buying:
These guys are engineering experts who take apart PSUs, analyze the quality of each product, and provide an evaluation of the product. Another great website is http://www.orionpsudb.com/, which shows which PSUs are manufactured by different OEMs.

Operating System (OS)

Windows 10

The most common OS. You can download the ISO here (https://www.microsoft.com/en-ca/software-download/windows10). For instructions on how to install the ISO from a USB drive, see here (https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/install-windows-from-a-usb-flash-drive) or watch a video here (https://www.youtube.com/watch?v=gLfnuE1unS8). For most users, go with the 64-bit version.
If you purchase a Windows 10 retail key (i.e., you buy it from a retailer or from Microsoft directly), keep in mind that you are able to transfer it between builds. So if you're building another PC for the 2nd, 3rd, etc. time, you can reuse the key for those builds PROVIDED that you deactivate your key before installing it on your new PC. These keys are ~$120.
However, if you have an OEM key (e.g., pre-builts), that key is tied specifically to your mobo. If you ever decide to upgrade your mobo on that pre-built PC, you might have to buy a new Windows 10 license. For more information, see this post (https://www.techadvisor.co.uk/feature/windows/windows-10-oem-or-retail-3665849/). The cheaper Windows 10 keys you can find on Kinguin are OEM keys; activating and deactivating these keys may require phoning an automated Microsoft activation line. Most of these keys are legitimate and cost ~$35, although Microsoft does not intend for home users to obtain this version of it. Buyer beware.
The last type of key is a volume licensing key. These are licensed in large volumes for corporate or commercial usage. You can find lots of these keys on eBay for ~$10, but if the IT department that manages these keys audits who is using them, or if the number of activations has exceeded the number allotted to that one key, Microsoft could block that key and invalidate your license. Buyer beware.
For more information on differentiating between all three types of keys, see this page (https://www.tenforums.com/tutorials/49586-determine-if-windows-license-type-oem-retail-volume.html).
If money is tight, you can get Windows 10 from Microsoft and use a trial version of it indefinitely. However, there will be a watermark in the bottom-right of your screen until you activate your Windows key.

MacOS

If you're interested in using MacOS, look into Hackintosh builds. This will allow you to run MacOS on PC parts, saving you lots of money. These builds are pretty picky about part compatibility, so you might run into some headaches trying to go through with this. For more information, see the following links:

Linux

If you're interested in a free open-source OS, see the following links:
For more information, go to /linux, /linuxquestions, and /linux4noobs.

Peripherals

Monitors

Keyboards and Mice

Overall

Please note that the cost-performance builds will change daily because PC part prices change often! Some builds will have excellent cost-performance one day and then have terrible cost-performance the next. If you want to optimize cost-performance, it is your responsibility to do this if you go down this route!
Also, DO NOT PM me with PC build requests! It is in your best interests to make your own topic so you can get multiple suggestions and input from the community rather than just my own. Thanks again.

Sample Builds

Here are some sample builds that are reliable, but may not be cost-optimized builds. These builds were created on September 9, 2018; feel free to "edit this part list" and create your own builds.

Links

Helpful links to common problems below:

Contributors

Thanks to:

Housekeeping

2019/09/22
2019/09/18
Updates:
2019/09/09
Updates:
Sorry for the lack of updates. I recently got a new job where I work 12 hours/day for 7 days at a time out of the city. What little spare time I have is spent on grad school and the gym instead of gaming. So I've been pretty behind on the news, and some information might not be as up-to-date as it would have been with fewer commitments. If I've made any mistakes, please understand it might take a while for me to correct them. Thank you!
submitted by BlackRiot to bapccanada

some thoughts on litecoin mining hardware

I'm slowly learning about bitcoin and litecoin. I've thought a little lately about litecoin mining hardware. This is my analysis. I've only had tangential exposure to hardware design, so my estimates or assumptions might be off. Feedback welcome!
The scrypt litecoin hash function is dominated by an operation called a salsa: it runs 2048 salsas for each hash, and each salsa involves reading/writing a 128B block from a 128KB scratch buffer. The requirement to have a 128KB buffer for each running hash is what makes scrypt difficult to accelerate. The 128B blocks are written successively in the first phase of 1024 salsas (the output of each salsa), and then read randomly in the second phase of 1024 salsas.
I thought about implementing the salsa on a Xilinx FPGA. I implemented a few salsa building blocks to get an idea of timing. The Xilinx chips have 2KB distributed blocks of RAM, but there isn't nearly enough on-chip memory to support many concurrent hashes. One idea is to store every 64th salsa output in the first phase, and then recompute intermediate salsas as needed. This means you need to do an expected 32 extra salsas for each salsa in the second phase.
Based on my experiments, it seemed like a 32 clock (latency) salsa running at 200+MHz is possible (or better, but this seems like the right order of magnitude) on an Artix-6 which costs about $300. The Artix-6 has 730 2KB buffers. Thus, I estimate:
730 (number of concurrent hashes) * (200M (clock frequency) / (1024 (salsas per phase) * (1 + 33) (expected computed salsas per salsa) * 32 (clock cycles per computed salsa))) = 130.6KH/s 
This gives 0.44KH/$. A 7970 card gets 1.75KH/$. We're off by a factor of 4 in price/performance. This design might work in an ASIC. In a custom design, you can tune the trade-off between memory and computation, and probably improve the speed estimates above. I'm still trying to estimate the cost of such an ASIC design, but I'm a little out of my depth.
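Spelling out that estimate as a sketch (the ~$400 price for a 7970 below isn't stated above; it's just what the 1.75KH/$ figure implies):

    # FPGA estimate: 730 concurrent hashes, 200 MHz clock, 32-cycle salsa,
    # and an expected (1 + 33) salsas computed per second-phase salsa.
    concurrent_hashes = 730
    clock_hz          = 200e6
    cycles_per_hash   = 1024 * (1 + 33) * 32   # salsas/phase * recompute factor * cycles/salsa

    fpga_hs = concurrent_hashes * clock_hz / cycles_per_hash
    print(f"FPGA estimate : ~{fpga_hs / 1e3:.0f} KH/s")                 # ~131 KH/s
    print(f"FPGA          : ~{fpga_hs / 1e3 / 300:.2f} KH/$ at $300")   # ~0.44 KH/$
    print(f"7970 reference: ~{700 / 400:.2f} KH/$ (700 KH/s / ~$400)")  # 1.75 KH/$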
I started to wonder why a 7970 gets such awesome price/performance. The other option is to put the 128KB blocks in DRAM. You don't need that much memory: 1GB gives you space for 8K concurrent hashes. But now you need high bandwidth to feed the salsa units. Each salsa reads 128B. A 7970 has a 260GB/s GDDR5 memory interface. That's
260GB (memory bandwidth) / (2048 (salsas/hash) * 128 (memory/salsa)) = 992KH/s 
Actual reported rates are around 700KH/s. I think that is because of the random access patterns in the second phase of salsas. That's about 1.75KH/$.
So the other option would be an ASIC with the salsa units and a GDDR5 memory interface like a 7970 board's. I estimate (from octopart.com) that the cost of the 3GB of DRAM on a 7970 card is about $60. Let's say the ASIC is $20 (about the cost of the bitcoin ASICs, but this might be wildly inaccurate for a chip and package that can support a 384-bit GDDR5 memory interface). Then we get 8.75KH/$, or about 5x the GPU.
Unfortunately, GDDR5 is a bleeding edge memory standard. An FPGA couldn't possibly manage that level of performance at this point. Designing a GDDR5 board and memory controller would probably be extremely difficult.
You could ask: what is the fastest DRAM interface supported by an FPGA? The Spartan-6 (approx. $90 and up) can support a 64-bit DDR2 PC-800 interface. That's 1.6GB/s, so
1.6GB/s (bandwidth) / (2048 * 128) = 6KH/s. 
A DDR2 PC-800 DIMM is about $14. That's a pathetic 0.06KH/$. You can manage 1066 or 1333 in a faster part, but that doesn't help price/performance.
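Putting the bandwidth-bound numbers side by side as a sketch (the KH/$ figures depend entirely on the cost guesses above):

    # Each scrypt hash moves 2048 salsas * 128 bytes of scratch-buffer traffic.
    BYTES_PER_HASH = 2048 * 128

    def kh_per_s(bytes_per_second):
        return bytes_per_second / BYTES_PER_HASH / 1e3

    gddr5_kh = kh_per_s(260e9)    # 7970-class 260 GB/s interface (theoretical)
    ddr2_kh  = kh_per_s(1.6e9)    # Spartan-6 DDR2 interface

    print(f"GDDR5, theoretical : ~{gddr5_kh:.0f} KH/s")                        # ~992
    print(f"ASIC + GDDR5       : ~{700 / (60 + 20):.2f} KH/$ (700 KH/s, $80)") # 8.75
    print(f"FPGA + DDR2        : ~{ddr2_kh:.1f} KH/s, ~{ddr2_kh / (90 + 14):.2f} KH/$")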
tl;dr: Trading memory for recompute puts FPGAs about 4x behind GPUs in price/performance, in a rough estimate. The same idea for an ASIC is worth a closer look. GPUs are surprisingly efficient for scrypt! ASIC+GDDR5 memory is competitive, but the design is out of reach for mere mortals.
edit: formatting.
submitted by mian2zi3 to litecoinmining

James G. Philips: We don't need a blocksize hard limit

Ok, I understand at least some of the reasons that blocks have to be kept to a certain size. I get that blocks which are too big will be hard for relays to propagate. Miners will have more trouble uploading large blocks to the network once they've found a hash. We need block size constraints to create a fee economy for the miners.
But these all sound to me like issues that affect some, but not others. So it seems to me like it ought to be a configurable setting. We've already witnessed with last week's stress test that most miners aren't even creating 1MB blocks but are still using the software defaults of 730k. If there are configurable limits, why does there have to be a hard limit? Can't miners just use the configurable limit to decide what size blocks they can afford to and are thus willing to create? They could just as easily use that to create a fee economy. If the miners with the most hashpower are not willing to mine blocks larger than 1 or 2 megs, then they are able to slow down confirmations of transactions. It may take several blocks before a miner willing to include a particular transaction finds a block. This would actually force miners to compete with each other and find a block size naturally instead of having it forced on them by the protocol. Relays would be able to participate in that process by restricting the miners ability to propagate large blocks. You know, like what happens in a FREE MARKET economy, without burdensome regulation which can be manipulated through politics? Isn't that what's really happening right now? Different political factions with different agendas are fighting over how best to regulate the Bitcoin protocol.
I know the limit was originally put in place to prevent spamming. But that was when we were mining with CPUs and just beginning to see the occasional GPU which could take control over the network and maliciously spam large blocks. But with ASIC mining now catching up to Moore's Law, that's not really an issue anymore. No single malicious entity can really just take over the network now without spending more money than it's worth -- and that's just going to get truer with time as hashpower continues to grow. And it's not like the hard limit really does anything anymore to prevent spamming. If a spammer wants to create thousands or millions of transactions, a hard limit on the block size isn't going to stop him. He'll just fill up the mempool or UTXO database instead of someone's block database. And block storage media is generally the cheapest storage. I mean, blocks could be written to tape and be just as valid as if they're stored in DRAM. Combine that with pruning, and block storage costs are almost a non-issue for anyone who isn't running an archival node.
And can't relay nodes just configure a limit on the size of blocks they will relay? Sure, they'd still need to download a big block occasionally, but that's not really that big a deal, and they're under no obligation to propagate it. Even if it's a 2GB block, it'll get downloaded eventually. It's only if it gets to the point where the average home connection is too slow to keep up with the transaction and block flow that there's any real issue there, and that would happen regardless of how big the blocks are. I personally would much prefer to see hardware limits act as the bottleneck than to introduce an artificial bottleneck into the protocol that has to be adjusted regularly. The software and protocol are TECHNICALLY capable of scaling to handle the world's entire transaction set. The real issue with scaling to this size is limitations on hardware, which are regulated by Moore's Law. Why do we need arbitrary soft limits? Why can't we allow Bitcoin to grow naturally within the ever increasing limits of our hardware? Is it because nobody will ever need more than 640k of RAM?
Am I missing something here? Is there some big reason that I'm overlooking why there has to be some hard-coded limit on the block size that affects the entire network and creates ongoing issues in the future?
  1. To Maintain Consensus
There have to be clearly defined rules about which blocks are valid and which are not for the network to agree. Obviously no node will accept a block that is 10 million terabytes; it would be nearly impossible to download even if it were valid. So where do you set the limit? And what if one node sets its limit differently than other nodes on the network? If that were to happen, the network would no longer be in consensus about which blocks were valid whenever a block was broadcast that met some nodes' size limits and not others. Setting a network-wide limit on the maximum block size ensures that everyone is in agreement about which blocks are valid and which are not, so that consensus is achieved.
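To make that concrete, here's a toy sketch (illustrative numbers only, not real protocol constants) of what happens when the validity limit is node-local rather than network-wide:

```python
# Toy example: two honest nodes with different node-local validity limits end
# up on different chains the moment an in-between block is broadcast.
def is_valid(block_size_bytes: int, node_max_bytes: int) -> bool:
    return block_size_bytes <= node_max_bytes

node_a_limit = 1_000_000    # node A considers blocks over 1 MB invalid
node_b_limit = 8_000_000    # node B allows up to 8 MB

block = 3_000_000           # a 3 MB block is broadcast
print(is_valid(block, node_a_limit))  # False -> node A rejects it, stays on the old tip
print(is_valid(block, node_b_limit))  # True  -> node B extends the new chain
# From here the two nodes follow different chains: a persistent fork, i.e. lost consensus.
```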
It is as impossible to upload a 10 million terabyte block as it is to download it. But even on a more realistic scale, say a 2GB block, there are other factors that prevent a rogue miner from being able to flood the network using large blocks -- such as the ability to get that block propagated before it is orphaned. A simple solution to these large blocks is for relays to set configurable limits on the size of blocks they will relay. If the rogue miner can't get his megablock propagated before it is orphaned, his attack will not succeed. That doesn't make the block invalid, just useless as a DoS tool. And over time, relays can raise the size limits they are willing to propagate according to what they can handle. As more and more relays accept larger and larger blocks, the true maximum block size can grow naturally and not require a hard fork.
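For a sense of scale, here's a back-of-envelope estimate of that orphan risk, assuming Poisson block arrivals with a 600-second mean and a naive propagation time of size divided by bandwidth (both big simplifications):

```python
# Rough orphan-risk estimate: P(someone else finds a block while yours is still
# propagating) ~= 1 - exp(-t/600), with t = block size / available bandwidth.
import math

def orphan_risk(block_bytes: float, bandwidth_bytes_per_s: float) -> float:
    t = block_bytes / bandwidth_bytes_per_s        # naive propagation time in seconds
    return 1 - math.exp(-t / 600)

# A 2 GB block pushed over a ~100 Mbit/s (12.5 MB/s) link:
print(f"{orphan_risk(2e9, 12.5e6):.0%}")           # roughly 23% chance of being orphaned
# A 1 MB block over the same link is effectively risk-free:
print(f"{orphan_risk(1e6, 12.5e6):.2%}")           # about 0.01%
```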
  2. To Avoid (Further) Centralization of Pools
Suppose we remove the 1 MB cap entirely. A large pool says to itself, "I wish I had a larger percentage of the network hashrate so I could make more profit."
Then they realize that since there's no block size limit, they can make a block that is 4 GB in size by filling it with nonsense. They and a few other pools have enough bandwidth to download a block of this size in a reasonable time, but a smaller pool does not. The small pool is then stuck trying to download a block that is too large, continuing to mine on its previous block until it finishes downloading the new one. This means the small pool is now wasting its time mining blocks that are unlikely ever to be accepted even if they were solved, since they wouldn't be in the 'longest' chain. Since the small pool's hash power is wasted, the original pool operator has effectively forced smaller pools out of the network, and simultaneously increased its own percentage of the network hashrate.
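The same back-of-envelope arithmetic shows how much of a small pool's work gets wasted while it is still downloading such a block (bandwidth figures below are illustrative only):

```python
# Fraction of the ~10-minute block interval a pool spends mining on a stale tip
# while it downloads an oversized block. Numbers are illustrative, not measured.
def wasted_fraction(block_bytes: float, bandwidth_bytes_per_s: float) -> float:
    download_time = block_bytes / bandwidth_bytes_per_s
    return min(download_time / 600, 1.0)   # capped at the full interval

big_pool_bw   = 125e6    # ~1 Gbit/s
small_pool_bw = 1.25e6   # ~10 Mbit/s
block = 4e9              # the hypothetical 4 GB junk block described above

print(f"big pool wastes   {wasted_fraction(block, big_pool_bw):.0%} of the interval")
print(f"small pool wastes {wasted_fraction(block, small_pool_bw):.0%} of the interval")
# Output: roughly 5% for the big pool versus 100% for the small pool.
```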
Yet another issue that can be addressed by allowing relays to restrict propagation. Relays are just as impacted by large blocks filled with nonsense as small miners are. If a relay downloads a block and sees that it's full of junk or comes from a miner notorious for producing bad blocks, it can refuse to relay it. If a bad block doesn't propagate, it can't hurt anyone. Large miners also typically have to use static IPs; anonymizing networks like Tor aren't geared towards handling that type of traffic. They can't afford to have the reputation of the IPs they release blocks from tarnished, so why would they risk getting blacklisted by relays?
  3. To Make Full Nodes Feasible
Essentially, larger blocks mean fewer people who can download and verify the chain, which results in fewer people willing to run full nodes and store all of the blockchain data.
If there were no block size limit, malicious persons could artificially bloat the block with nonsense and increase the server costs for everyone running a full node, in addition to making it infeasible for people with just home computers to even keep up with the network. The goal is to find a block size limit with the right tradeoff between resource restrictions (so that someone on their home computer can still run a full node), and functional requirements (being able to process X number of transactions per second). Eventually, transactions will likely be done off-chain using micropayment channels, but no such solution currently exists.
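The "X transactions per second" side of that tradeoff is simple arithmetic; here's a small sketch assuming roughly 250-byte transactions and 10-minute blocks (both rough averages, not fixed values):

```python
# Throughput a given block size can carry, under rough average assumptions.
def tx_per_second(block_bytes: int, avg_tx_bytes: int = 250, block_interval_s: int = 600) -> float:
    return (block_bytes / avg_tx_bytes) / block_interval_s

for size_mb in (1, 8, 20):
    print(f"{size_mb} MB blocks ~ {tx_per_second(size_mb * 1_000_000):.1f} tx/s")
# 1 MB ~ 6.7 tx/s, 8 MB ~ 53 tx/s, 20 MB ~ 133 tx/s -- and each step up also
# multiplies the bandwidth, storage, and verification load on every full node.
```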
This same attack could be achieved simply by sending lots of spam transactions and bloating the UTXO database or the mempool. In fact, given that block storage is substantially cheaper than UTXO/mempool storage, I'd be far more concerned with that type of attack. And this particular attack vector has already been largely mitigated by pruning and could be further mitigated by allowing relays to decide which blocks they propagate.

James G. Phillips IV
We don't need a blocksize hard limit; 20 MB + 40% per year is already a big compromise. I'm afraid we'll still hit the wall in the future.
submitted by finway to Bitcoin
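Purely as arithmetic, here's roughly what a 20 MB starting cap growing 40% per year implies over time (a sketch of the schedule as described in the comment above, nothing more):

```python
# Compound growth of a 20 MB cap at 40% per year.
def cap_after(years: int, start_mb: float = 20.0, growth: float = 0.40) -> float:
    return start_mb * (1 + growth) ** years

for y in (0, 5, 10, 20):
    print(f"year {y:>2}: {cap_after(y):,.0f} MB cap")
# year 0: 20 MB, year 5: ~108 MB, year 10: ~579 MB, year 20: ~16,734 MB --
# the commenter's worry is whether home bandwidth and storage keep growing that fast.
```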
