
Discussion – Can software damage hardware?

As present-day cyber technology advances, the threat landscape widens with it.

Cybercriminals are constantly finding ingenious ways to tunnel into businesses, schools, government offices, and home computers, disrupting services and daily operations.

Several years ago, hackers and malware shut down nuclear centrifuges in Iran and severely damaged an unnamed steel mill in Germany.

Given the advancements we have today, are such incidents really possible?

A 2000 headline in the Weekly World News, a tabloid known for fictional stories, claimed that “hackers can now turn your home computer into a bomb and blow your family to smithereens, and do so remotely from thousands of miles away.” That, of course, is impossible in reality.

newspaper featuring hackers

Source: WIRED

So can software damage hardware? Theoretically yes, practically mostly no: conditions like these are unrealistic on most machines today.

Depending on your definition of damage, it might be possible in some extremes:

  • Running your CPU at 100% load until it exceeds its maximum safe temperature;
  • Turning your speakers’ volume to maximum and playing your favourite playlists non-stop for weeks;
  • Writing to a USB flash drive a few hundred million times until its memory cells wear out; or
  • Studying a hard drive’s geometry and issuing the worst possible seek patterns, heating the drive quickly and shortening its lifespan.

Obviously, you need a lot of time to damage your computer.
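To put the flash-wear scenario in perspective, here is a rough back-of-the-envelope sketch in Python. The endurance figure is an assumption (on the order of 100,000 program/erase cycles for older flash), and real drives spread writes with wear leveling, which stretches this out enormously:

```python
# Assumed endurance for a single flash cell (older NAND; purely illustrative)
ERASE_CYCLES = 100_000
WRITES_PER_SECOND = 1  # hammering one spot once per second

seconds = ERASE_CYCLES / WRITES_PER_SECOND
print(round(seconds / 86_400, 1))  # about 1.2 days to wear out that one cell
```

Wearing out enough cells to make the whole drive unusable scales this up by orders of magnitude, which is why "a lot of time" is the operative phrase.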

TL;DR

To answer the question of whether software can harm your hardware: it’s both a yes and a no.

As all hardware is driven by software, it is technically possible to destroy hardware, or at least render it inoperable, by messing with software. Think of overclocking: you change settings in the BIOS, which gives you access to low-level controls. Mess up, and voila! Your hardware is gone. Malware could likewise modify those same settings and potentially disrupt or destroy your hardware.

No, in the sense that software is nothing but instructions directing a flow of electricity, and by itself that flow cannot normally damage hardware.

Lastly, software can corrupt other software (such as firmware) that makes the hardware work, but this doesn’t physically break the hardware.

Is it TRUE that software can damage your hardware? How would that even be possible?



There are quite a few instances where you can say that software can “potentially” harm your hardware.

In theory, such incidents might include:

A BIOS flash. Some motherboards allow you to flash (rewrite) the BIOS via software from within the OS. This opens a door for malware to flash the BIOS with something that will damage the processor.

Overclocking tools. Some motherboards also provide overclocking tools, allowing you to change CPU settings from within the OS. As in the first case, if a virus takes these over and pushes your CPU’s settings to the extreme: boom.

Stress-tests and intensive applications. Pushing your CPU to its limits can spike temperatures, which can eventually damage it. Of course, the fans in your computer help cool the CPU down, and most CPUs are designed to shut off when they reach a dangerously high temperature (thermal shutdown).
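As a toy illustration of thermal shutdown (all numbers are made up; real CPUs use on-die sensors and firmware-defined trip points), a minimal simulation might look like:

```python
def run_hot(load_rise, cooling, tjmax=100.0, ambient=25.0, steps=50):
    """Toy CPU temperature model: each step adds heat from load and
    removes heat proportional to how far the CPU is above ambient.
    Returns True if the temperature ever crosses tjmax (thermal shutdown)."""
    temp = ambient
    for _ in range(steps):
        temp += load_rise - cooling * (temp - ambient)
        if temp >= tjmax:
            return True
    return False

print(run_hot(load_rise=5.0, cooling=0.1))   # healthy fan: settles below tjmax
print(run_hot(load_rise=5.0, cooling=0.01))  # failing fan: trips thermal shutdown
```

With adequate cooling the temperature settles at an equilibrium well below the trip point; cripple the cooling term and the same load marches straight past it.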

So is such an attack possible, even in theory? CrowdStrike, a security vendor, says yes.

“We can actually set the machine on fire,” said Dmitri Alperovitch, CrowdStrike CTO in a statement.


The exploit requires completely replacing the Mac’s firmware, which controls every aspect of the hardware.

This means an attacker would need to offer a fake Mac firmware update and convince the user to install it.


Software can potentially damage hardware if it’s malicious, but with legitimate software it’s extremely unlikely to happen.

For the German steel mill incident, someone found a way to control the plant’s equipment, preventing people on site from shutting down a blast furnace.

As for Stuxnet, the attack on the nuclear centrifuges: it didn’t destroy computers; it exploited the computers attached to the centrifuges, and destroyed the centrifuges.

Viruses are unlikely to destroy your computer itself, but hackers might someday go after the objects connected to it.

By Tuan Nguyen


Discussion – What is Luxury Car Tax?

Luxury Car Tax or simply LCT is a tax collected by the Australian Taxation Office (ATO) on cars with a GST-inclusive value above the LCT threshold.

The tax is paid by businesses that sell or import luxury cars (dealers), and also by individuals who import luxury cars, imposed at the rate of 33% on the amount above the luxury car threshold.

On July 1, 2019, the ATO revised the Luxury Car Tax (LCT) threshold.

The new LCT threshold for the 2019-20 financial year is $67,525 (up from $66,331 in the 2018-19 financial year). For fuel-efficient vehicles, the LCT threshold for 2019-20 remains the same at $75,526.

See table below:

LCT threshold

Source: ATO

The indexation factor for the 2019–20 financial year for:

  • fuel-efficient vehicles is 0.987
  • other vehicles is 1.018.

LCT was first introduced by the Howard Government in 2000 alongside the GST, and has added more than $5 billion to the government’s coffers in the past 10 years.

The tax was introduced because the GST, a broad-based consumption tax, replaced a range of other taxes and levies in 2000, leading to price reductions for new cars right across the board.

TL;DR

cash and car

Source: Car sales Australia

With Luxury Car Tax (LCT), if you buy a car with a value above the threshold, you’ll pay an extra third on the amount above the threshold. The tax isn’t listed anywhere, because it’s already built into the manufacturer’s retail price.

The threshold is indexed, going up a bit every financial year. For financial year 2019-20, it is now $67,525.

Interestingly, since 2010 the threshold for fuel-efficient vehicles has only gone up by $526 (about 0.7%), while the threshold for all other vehicles has gone up by $10,345 (over 18%).

How is LCT calculated?

luxury cars

Source: 100 all’ora

Cars retailing at a price higher than $67,525 will incur the LCT, charged at a rate of 33% for the component of the price exceeding the threshold.

That means a car costing $70,000 retail would incur tax on the excess (the difference between the price and the threshold). That excess ($70,000 - $67,525) is $2,475.

The catch is that the Goods and Services Tax (GST) component of that excess has to be deducted before calculating the LCT. Otherwise, the consumer would be paying tax on a tax.

Deducting the 10% GST from the excess (dividing by 1.1) leaves a taxable sum of $2,250. Multiplying by 33 per cent results in a charge of $742.50.

That takes the total price of the car to $70,742.50, which the buyer pays. The dealer receives the money from the customer in exchange for the car and forwards the tax amount payable on the car to the federal government.
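The worked example above can be sketched in Python, with the threshold and rates as quoted in this article (the function name is just illustrative):

```python
def lct_payable(price, threshold=67_525.0, gst_rate=0.10, lct_rate=0.33):
    """LCT charged on the GST-exclusive portion of the price above the threshold."""
    excess = max(0.0, price - threshold)
    taxable = excess / (1 + gst_rate)  # strip the 10% GST component first
    return taxable * lct_rate

price = 70_000.0
lct = lct_payable(price)
print(round(lct, 2))          # 742.5
print(round(price + lct, 2))  # 70742.5
```

A car priced below the threshold simply returns zero, matching the rule that only the amount above the threshold is taxed.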

Does it apply to every vehicle sold in Australia?

LCT doesn’t necessarily apply to all vehicles sold in Australia.

As stated by the ATO, cars manufactured or imported more than two years ago are exempt, for example, as are emergency vehicles.

Motor homes, camper vans, and commercial vehicles designed to carry more goods than passengers are exempt as well.

“Green” cars using fuel at a rate of 7.0L/100km or less are also free of LCT up to a ceiling figure.

Basically, any car consuming fuel at 7.0L/100km or lower and priced (currently) above $67,525 and below $75,526 will not incur the tax at all.


Australia is expected to call it quits on the LCT as it negotiates a free-trade agreement with the European Union (EU), accommodating prestigious European brands, notably from Germany, which insist that the tax actively deters buyers from purchasing their products.

One effect of this reform would be the government’s loss of the revenue collected from each luxury car purchase, which is why the proposal is to phase the tax out gradually to minimize the impact of the lost revenue.

By Tuan Nguyen


Discussion – What is GST?

Also known as the “Goods and Services Tax,” GST as Investopedia would define it is a “value-added tax levied on most goods and services sold for domestic consumption.”

The tax is ultimately paid by consumers, which is then remitted to the government by the businesses selling the goods and services.

As a result, GST provides revenue for the government.

In Australia, GST is a broad-based tax of 10% on most goods, services and other items sold or consumed in the country.

Most businesses in Australia collect GST for the government whenever they sell products and services, and pass this revenue on to the Australian Taxation Office (ATO).

In return, the government distributes this money to its states and territories to finance public services and infrastructure, such as hospitals, roads and public schools.

Most basic foods, some education courses, and some medical, health and care products are exempt from GST.

GST was introduced on July 1, 2000, replacing a range of existing state and federal taxes, duties and levies.

TL;DR

GST or Goods and Services Tax is an indirect tax payable by the suppliers of certain goods and services.

Business owners registered to pay GST initially need to assess whether their goods and services are taxable, because they cannot charge their consumers this type of tax if their goods and services are GST-free or input-taxed.

They can charge their consumers GST by adding a flat 10% on top of the price they charge for their goods and services.
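In code, the 10% rule and its inverse (the well-known “divide by 11” trick for extracting GST from a GST-inclusive price) look like this; the function names are illustrative:

```python
GST_RATE = 0.10

def add_gst(base_price):
    """Price charged to the consumer, GST included."""
    return base_price * (1 + GST_RATE)

def gst_component(gst_inclusive_price):
    """GST portion of a GST-inclusive price; for a 10% rate this is price / 11."""
    return gst_inclusive_price / 11

print(round(add_gst(100.0), 2))  # 110.0
print(gst_component(110.0))      # 10.0
```

The divide-by-11 shortcut only holds for a 10% rate: a $110 GST-inclusive price is $100 base plus $10 GST, and $110 / 11 = $10.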

GST: How does it work?

how GST works

Source: ATO

The GST system, as how the Commonwealth of Australia executes it, “operates at each step of the manufacturing, wholesale and retail process, with each participant ultimately collecting a portion of the total GST during their specific transaction and remitting it to the ATO. Total GST collected is made available to the Federal Government to apportion amongst the States and Territories in line with specific agreements.”

 Use the GST calculator on the MoneySmart website to work out how much GST to include in your prices.

Who needs to pay this tax?

GST is payable by the suppliers of certain goods and services. You will need to register for GST if:

  • Your business or enterprise has a GST turnover of AU$75,000 or more.
  • You’re a non-profit organization with a turnover of $150,000 per year or more.
  • You provide taxi travel or limousine travel services (including ride-sourcing) as part of your business, regardless of your GST turnover. (This rule also applies to both taxi owner drivers and people who just rent a taxi.)
  • You want to claim fuel tax credits for your business or enterprise.

Note: It is optional to register for GST if you don’t fit into one of these categories. However, if you choose to register, you must stay registered for at least 12 months.

Overseas businesses selling imported services or digital products to Australian consumers and making over AU$75,000 should also register for GST.

This includes:

  • Digital products (i.e. streaming or downloading of movies, apps, games and e-books).
  • Imported services such as architectural or legal services.

As of July 1, 2018, overseas businesses making over AU$75,000 from selling low-value imported goods to Australian consumers also need to register for GST. This affects goods valued at AU$1,000 or less, on items like:

  • Clothing;
  • Cosmetics;
  • Books; and
  • Electrical appliances.

How do you register for GST?

If you meet one of the above criteria, you are required to register for GST.

 Step 1:

To register for GST, you will need an Australian Business Number (ABN).

Once you have an ABN, you can register for GST online, by phone, or through a registered agent.

Step 2: Once your business is registered for GST, you are required to pay GST on all goods and services you provide unless they are GST-free or input-taxed.

You can pass on the cost of GST by including it in the prices you charge for your goods and services.

Note: Businesses that charge GST must send the GST amount to the Australian Taxation Office (ATO). You may be required to transfer the GST amount to the ATO monthly, quarterly or annually depending on your business’s GST turnover.

If you have customers overseas, you do not have to charge them GST, because exported products are generally GST-free. However, you can still claim a refund for the GST you paid on any input materials purchased. Nonetheless, you will still need to pay GST in some circumstances.

Step 3: After the end of each business quarter, you need to complete a Business Activity Statement (BAS) and lodge it with the ATO.

In your BAS, you need to report the amount you have collected from your consumers as GST and pay the equivalent amount to the ATO.
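The BAS reporting step boils down to simple arithmetic: GST collected on sales minus GST credits on business purchases, with the difference paid to (or refunded by) the ATO. A sketch, with hypothetical figures:

```python
def net_gst(gst_on_sales, gst_credits_on_purchases):
    """Net amount payable to the ATO for the period (negative means a refund)."""
    return gst_on_sales - gst_credits_on_purchases

# hypothetical quarter: $8,200 GST collected on sales, $3,150 GST paid on inputs
print(net_gst(8200, 3150))  # 5050
```

If credits exceed collections for a quarter, the result is negative and the business is owed a refund rather than owing a payment.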

Note: You can lodge your BAS online, by mail or over the phone (for a nil lodgment), and pay online, by mail or in person at an Australia Post outlet.


For more inquiries regarding GST, you can visit the ATO website about GST and excise concessions for small businesses.

For checklists on assessing your GST compliance and risk management processes, download the ATO’s GST Governance and Risk Management Guide.

Learn more about registering for GST, including helpful videos, on the ATO website.

For more personal advice about your business and tax, consult your accountant.

By Tuan Nguyen

Discussion – Peer to peer lending platforms

Peer-to-peer (P2P) lending platforms are growing in popularity. They are online investing platforms that match people or companies looking to lend with those looking to borrow, allowing them to make direct arrangements with one another.

Providing intermediary services, these platforms perform the relevant due-diligence risk assessments and credit checks, often charging fees for their services, but they are not party to the final lending agreement.

Some notable P2P lending platforms in Australia are RateSetter, MoneyPlace, and SocietyOne.

TL;DR

P2P lending websites connect borrowers directly to investors (lenders), setting the rates, terms and conditions and enabling the transaction. People use P2P lending services for a variety of reasons: auto loans, home renovations, debt consolidation, small business loans, and more.

The rates these platforms offer are often better than what borrowers can get from credit cards or personal loans at traditional banks.

The system and benefits of P2P lending platforms

peer to peer lending principle

Source: Finextra

Peer-to-peer (P2P) lending, also called person-to-person lending or social lending, is a form of direct lending of money to individuals or businesses, with no official financial institution participating as an intermediary in the deal. The lending is generally done through online platforms that match lenders (investors) with potential borrowers.

One benefit of P2P lending is that borrowers can get loans at better interest rates compared to traditional banks, since P2P lending companies generally operate online with lower overhead costs.

Investors get to see and select exactly which loans they want to fund.

P2P platforms offer both secured and unsecured loans, though most P2P loans are unsecured personal loans. Secured loans are rare in the industry and are usually backed by luxury goods.

The following are the general steps in P2P lending process:

  • First, a potential borrower completes an online application on the peer-to-peer lending website.
  • Second, the platform assesses the application and determines the applicant’s risk and credit rating, then assigns an appropriate interest rate.
  • Third, once the application is approved, the applicant receives the available options from investors based on his/her credit rating and assigned interest rate.
  • Fourth, the applicant evaluates the available options and chooses one of them.

Lastly, the applicant is responsible for making periodic (typically monthly) interest payments and repaying the principal once the loan reaches maturity.
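The repayment pattern described in that last step (periodic interest, principal at maturity) can be sketched as follows. This is a toy interest-only model with illustrative numbers; many real P2P loans amortize instead:

```python
def payment_schedule(principal, annual_rate, months):
    """Interest-only monthly payments, with the principal repaid at maturity."""
    monthly_interest = principal * annual_rate / 12
    payments = [monthly_interest] * months
    payments[-1] += principal  # final payment also returns the principal
    return payments

schedule = payment_schedule(10_000, 0.12, 12)
print(round(schedule[0], 2))   # 100.0 (monthly interest)
print(round(schedule[-1], 2))  # 10100.0 (final interest payment plus principal)
```

On a $10,000 loan at 12% per year, the lender receives $100 a month and the principal back with the final payment.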

Benefits and fallbacks

Some benefits of P2P lending are:

  • Higher returns for investors relative to other types of investment.
  • For both lenders and borrowers, P2P platforms are more agile, efficient and transparent to deal with than traditional banking institutions.
  • A good way for investors to diversify their portfolios with an asset less tied to stock-market turbulence.
  • A more accessible source of funding for some borrowers than conventional loans from financial institutions.
  • Lower interest rates for borrowers, due to competition between lenders and lower origination fees.
  • A reasonable level of security for the lender’s investment on most platforms.
  • Lenders can often withdraw their funds at any time.
  • A greater chance of making an ethical investment.

Risks can be:

  • A lender should be aware of the default probability of his/her counter party, given the fact that most borrowers who apply for P2P loans possess low credit ratings that do not allow them to obtain a conventional loan from a bank.
  • Lenders have no government protection (such as deposit guarantees) in case of borrower default.
  • Peer-to-peer lending may not be available to some borrowers or lenders.


Peer-to-peer lending is a unique platform for borrowers and lenders, providing both with alternatives to traditional banks and building societies. Though it has its strengths and weaknesses, P2P lending is gaining traction and seems certain to become more popular in countries such as China, Japan, Italy, and the Netherlands.

By Tuan Nguyen


Discussion – Google Spreadsheet vs Microsoft Excel

Spreadsheets are invaluable tools for creating budgets, producing graphs and charts, and storing and sorting data. They vary in complexity and serve many purposes: business data storage, accounting and calculation, budgeting and spending, data exports, data sifting and cleanup, and everyday administrative tasks.

The best-known examples of spreadsheet software are Google Sheets and Microsoft Excel. Both allow complex mathematical calculations and data analysis, have an auto-save feature, and are compatible with Android, iOS, Windows and macOS.

Although many businesses rely on Microsoft Excel as their go-to application for spreadsheets, Google Sheets innovatively gives companies the option to figure out budgets, client contacts, and more.

TL;DR

Google Sheets and Microsoft Excel are two of the most popular spreadsheet platforms used by many small business owners and freelance enthusiasts anytime, anywhere.

They both have similar functionality, such as advanced conditional formatting, dependency tracking, and robust graph and chart creation options.

Let’s take the time to discuss their core differences.

Key Differences: Google Spreadsheet vs Microsoft Excel

excel spreadsheet

Source: Lifehack

For years, Microsoft Excel has been a constant partner for businesses, but as software migrates to the cloud, Google’s spreadsheet software has emerged as a worthy opponent.

For now, consider these questions.

What are the main differences between the two? In which key areas does each have the edge? Which is best to use?


Although more and more users are trying out Google Sheets, Microsoft Excel has an entrenched user base that’s comfortable with it, and it takes a considerable amount of time for users to switch to a new app and become familiar with it.


Google Sheets makes collaboration easy, fitting into almost any workflow. It lets you share a spreadsheet with just about anyone while limiting their access and/or control.

Microsoft Excel, on the other hand, allows sharing and collaboration, but it is more limited than Google Sheets: you’re largely restricted to sharing files via email, without the same level of real-time collaboration.

If you are using Office 365, you do get access to similar tracked edits and activity views. But for now, Google Sheets prevails.

Functions and formulas.

Excel has a far larger library of functions and formulas, while Sheets is still missing some of them.

Offline access.

Microsoft Excel works perfectly offline and automatically syncs your files via OneDrive as soon as you regain internet access.

Google Sheets does offer offline access, but you may have difficulty reaching files you previously created while online, pushing you to install an offline extension to work on files extensively offline; and extensions can misbehave.

Handling larger budget files.

In terms of handling transaction records, tabs, and fancy calculations and graphs, Microsoft Excel can handle up to 17,179,869,184 cells.

Although Google Sheets might be fine and dandy for your budget, it can only handle up to 5,000,000 cells.


When using a lot of functions, Microsoft Excel has a handy Quick Access Toolbar where you can pin buttons for a quick and easy workflow. Google Sheets has no such feature.

Cloud and Syncing.

Google Sheets was built from the ground up as a cloud-based alternative to Microsoft Excel. Everything is accessible from your Google account, and you can see and access all your files from Google Drive.

Excel on Office 2019 or earlier requires a bit of setting up; you need Office 365 to get the same level of instant synchronization between devices.


Microsoft Excel requires a one-time purchase of Microsoft Office or a subscription to Office 365, while Google Sheets is completely free to use: no annual fee, no monthly fee, no per-user fee. Plus, if you have a Google account, you can already access it directly.


Though Google Sheets and Microsoft Excel each have their pros and cons, the two keep catching up with each other in technology, accessibility and compatibility, and the gap between them keeps shrinking, slowly making the two platforms quite similar.

By Tuan Nguyen


Technology review – 3D Optical Data Storage

Optical data storage as what Britannica would define it is an “electronic storage medium that uses low-power laser beams to record and retrieve digital (binary) data.”

Optical storage technology uses lasers to write to, and read from, small discs that contain a light-sensitive layer on which data can be stored.



Optical storage systems usually consist of a drive unit and a storage medium in a rotating disc form. The discs, in general, are pre-formatted using grooves and lands (tracks), enabling the positioning of an optical pick-up and recording head to access the information on the disc.

Compared to other storage formats, optical discs are small, portable, and do not easily wear out with continual use, making them particularly useful for storing data.

The first method for storing data on a hard medium using light was invented by James T. Russell in the late 1960s, after he realized that the wear and tear vinyl records suffer from continuous contact between stylus and record could be avoided by using light to read the music without physically touching the disc. The technology has seen many developments since.

Today’s conventional optical storage is a two-dimensional medium, e.g. CD-ROM, DVD, Blu-ray. While these formats have steadily improved in storage capacity, they remain limited because data is written on only one or two surface layers of the disc.

Now, researchers are trying to develop a new generation of storage media, 3D optical data storage, that could provide petabyte-level mass storage on DVD-sized discs.

TL;DR

3D optical data storage is an experimental storage technology predicted to offer exponentially more storage capacity than today’s data storage technologies.

This storage system may use a disc that looks nearly the same as a transparent DVD but contains 100 layers of information on a single disc, each at a different depth in the medium and each consisting of a DVD-like spiral track.

The average thickness of this upcoming disc format is predicted to be 5 millimeters, so expect thicker discs than today’s ordinary CDs and DVDs.

What is 3D optical data storage?


A cross-section of a 3D optical disc (yellow) along a data track (orange marks).

Source: TheBendster (talk) (Uploads)

As Wikipedia would point it, “3D optical data storage is any form of optical data storage in which information can be recorded or read with three-dimensional resolution (as opposed to the two-dimensional resolution afforded, for example, by CD).”

The origins of this storage system date back to the 1950s, when Yehuda Hirshberg developed the photochromic spiropyrans and suggested their use in data storage.

In the 1970s, Valeri Barachevskii demonstrated that this photochromism could be produced by two-photon excitation and eventually in the late 1980s, Peter T. Rentzepis showed that this could lead to three-dimensional data storage.

3D optical data storage has the potential to provide petabyte-level (1024 TB) mass storage on DVD-sized discs, using a technology that allows more than 100 layers to be written on a disc that looks like a traditional DVD, exponentially increasing the data a disc can hold.

Estimates suggest that 3D optical data storage discs could store 5 terabytes of data or more.
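A quick sanity check on these capacity claims, assuming layers simply add linearly (the per-layer figures are assumptions: 4.7 GB is the capacity of one DVD layer, and the multi-terabyte estimates imply a much higher per-layer density):

```python
def disc_capacity_gb(layers, per_layer_gb):
    """Total capacity of a multi-layer disc, assuming layers add linearly."""
    return layers * per_layer_gb

print(round(disc_capacity_gb(100, 4.7)))  # 470 GB with DVD-density layers
print(disc_capacity_gb(100, 50) / 1000)   # 5.0 TB needs roughly 50 GB per layer
```

In other words, 100 layers at plain DVD density fall well short of 5 TB; the petabyte-level projections rest on denser recording within each layer, not layer count alone.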

Types of 3D storage

There are two major types of 3D storage:

  • The simple storage of data throughout the volume of the disc; and
  • Holographic storage.

Localized-bit. An extension of standard disc storage, localized-bit storage puts data not just on the surface of the disc but throughout its volume. The laser reads the data through the medium, rather than across the lands and pits of a traditional CD-ROM or DVD.

Holographic data storage. Though the above method is viable, holographic data storage has much greater storage and retrieval potential, and better opportunities for implementation.

A hologram is a three-dimensional image created as light beams (e.g., from a laser) merge. Typically, a laser beam is split into two paths: a data beam and a reference beam, which are directed into the storage medium, such as a crystal.

Advantages and disadvantages of 3D optical data storage

Optical storage differs from data storage techniques that rely on other technologies, such as magnetism or semiconductors.

Upon using 3D optical data storage:

  • Optical media can last a long time, depending on the kind you choose and given proper care.
  • It is great for archiving: once data is written, it cannot be overwritten, so the data is permanently preserved.
  • It is widely readable across platforms (PCs or any other system).
  • Optical media can pinpoint a particular piece of data independently of the other data on the volume or the order in which that data was stored.

The downsides can be:

  • Due to the write-once-read-many (WORM) characteristic, the media cannot be reused.
  • The server uses software compression to write compressed data to optical media, consuming considerable processing resources and increasing the time needed to write and restore data.


Although several companies are actively developing the technology and claim it may become available soon, no commercial product based on 3D optical data storage has yet arrived, owing to design issues that still need to be addressed.

By Tuan Nguyen


Technology review – Femto computing

Nanotechnology is now well established, well applied, and well funded. In the near future, a time may come when we encounter femtotechnology.

As what Wikipedia would state: Femtotechnology is a “hypothetical term used in reference to structuring of matter on the scale of a femtometer, which is 10^−15m. This is a smaller scale in comparison to nanotechnology and picotechnology which refer to 10^−9m and 10^−12m respectively.”

TL;DR

Femtotechnology is still in the theoretical zone, with no realistic application in the present.

Imagine a future in which the soles of your shoes are made of femto-enhanced rubber, more resilient to the elements. Femto-sized probes and chemicals may one day course through your blood to protect your immune system against deadly viruses and diseases, or enable your smartphone to be thin and flexible, or even integrated into your body.

These things will be some of the most notable innovations and inventions that the forthcoming femtotechnology could provide for the whole humanity, thanks to the power of femto computing.

Femto computing: Measuring in a subatomic scale

illustration of lights

Source:  Ecstadelic

An Australian AI researcher named Hugo de Garis wrote a few years ago in Humanity Plus Magazine on the great power that future technology could bring: “If ever a femtotech comes into being, it will be a trillion trillion times more “performant” than nanotech, for the following obvious reason. In terms of component density, a femtoteched block of nucleons or quarks would be a million cubed times denser than a nanoteched block. Since the femtoteched components are a million times closer to each other than the nanoteched components, signals between them, traveling at the speed of light, would arrive a million times faster. The total performance per second of a unit volume of femtoteched matter would thus be a million times a million times a million times a million = a trillion trillion = 10^24.”

Through femtotechnology comes femto computing: computing at the scale of subatomic particles, three orders of magnitude smaller than picotechnology and six orders of magnitude smaller than nanotechnology.

Femto is derived from the Danish word femten, meaning “fifteen”: a femtometer, or “fermi,” is 10^−15 of a meter.

Ben Goertzel, one of the world’s leading AI researchers, highlighted in his companion article in Humanity Plus Magazine: “What a wonderful example we have here of the potential for an only slightly superhuman AI to blast way past humanity in science and engineering. The human race seems on the verge of understanding particle physics well enough to analyze possible routes to femtotech. If a slightly superhuman AI, with a talent for physics, were to make a few small breakthroughs in computational physics, then it might (for instance) figure out how to make stable femto structures at Earth gravity… resulting in femto computing – and the slightly superhuman AI would then have a computational infrastructure capable of supporting massively superhuman AI. Can you say “singularity”? Of course, femtotech may be totally unnecessary in order for a Vingean singularity to occur (in fact I strongly suspect so). But be that as it may, it’s interesting to think about just how much practical technological innovation might ensue from a relatively minor improvement in our understanding of fundamental physics.”

Having been a computer science professor, and having mapped computer science concepts to QCD (quantum chromodynamics) phenomena, Hugo de Garis writes that when computing at the femto level, the essential ingredients of (digital) computing are bits and logic gates.

“A bit is a two-state system (e.g., voltage or no voltage, a closed or open switch, etc.) that can be switched from one state to another. It is usual to represent one of these states as “1” and the other as “0,” i.e., as binary digits. A logic gate is a device that can take bits as input and use their states (their 0 or 1 values) to calculate its output.

The three most famous gates are the NOT gate, the OR gate, and the AND gate. The NOT gate switches a 1 to a 0, and a 0 to a 1. An OR gate outputs a 1 if one or more of its two inputs is a 1, else outputs a 0. An AND gate outputs a 1 only if the first AND second inputs are both 1, else outputs a 0.

There is a famous theorem in theoretical computer science, that says that the set of 3 logic gates {NOT, OR, AND} are “computationally universal,” i.e., using them, you can build any Boolean logic gate to detect any Boolean expression (e.g. (~X & Y) OR (W & Z)).”

So if he can find a one-to-one mapping between these 3 logic gates and phenomena in QCD, he can compute anything in QCD. He would have femtometer-scale computation.
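The universality claim is easy to demonstrate in code. The sketch below (plain Python over 0/1 bits, not QCD) builds XOR and the text's example expression using only NOT, OR, and AND:

```python
# The three universal gates from the text, as functions over bits (0 or 1).
def NOT(a): return 1 - a
def OR(a, b): return max(a, b)
def AND(a, b): return min(a, b)

# Any Boolean function can be composed from them, e.g. XOR:
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

# The example expression from the text: (~X & Y) OR (W & Z)
def expr(x, y, w, z): return OR(AND(NOT(x), y), AND(w, z))

assert [XOR(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]
print(expr(0, 1, 0, 0))  # ~0 & 1 = 1, so the whole expression outputs 1
```

Because any truth table can be written this way, a physical system that realizes these three gates can, in principle, compute anything a digital computer can.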

You can read the essay in full here.


In general, practical applications of femtotechnology are currently considered unlikely; we are a long way from realizing its use in real life.

But there are already sources pointing to possibilities for femto science in biology through ultrashort pulse laser technology. With it, biologists can peer into known biological reactions on shorter timescales than ever before.

For more new and exciting ideas about it, check out Femtochemistry and Femtobiology: Ultrafast Events in Molecular Science by Monique M. Martin and James T. Hynes.

By Tuan Nguyen

quantum computer

Technology review – New Google Quantum Computer

Over the years, the wonders of quantum computers have stirred the curiosity and imagination of the gifted minds in the tech community.

Since then, countless prototypes of these supercomputers have surfaced in the tech market with the hopes of a brighter, progressive, and fast-engaging future.

Last week, a draft research paper revealed that Google has built a quantum computer that solved a problem which would take supercomputers 10,000 years.

The leaked paper was erroneously uploaded on the website of NASA’s Ames Research Center in Moffett Field, California before it was pulled down.

According to the Financial Times, where the story first broke, some of the researchers there are paper authors. Readers downloaded the manuscript before it vanished, and it has been circulating online, including on Reddit.

John Martinis, the physicist who leads Google’s quantum computing effort in Santa Barbara, California, remained tight-lipped about the leak. Others in the field think it is legitimate.

You can read the paper in full here.


Tl; dr;

While our classical computers – laptops, smartphones, and even modern supercomputers – claim to be extraordinarily powerful, the capabilities of these devices are now in question because of a paper leaked on the internet.

Google, one of the tech giants investing in quantum computing, is said to have just achieved Quantum Supremacy. It is a milestone where a quantum computer is able to perform a calculation that classical computers can’t practically do.

The paper names only one author – John Martinis.

He is a physicist who leads Google’s quantum computing effort in Santa Barbara, California.

Will Google’s Quantum Supremacy milestone strike a significant implication?

illustration of a computer

Source: Popular Science

While IBM, Microsoft, Intel, and numerous others are eyeing how to advance quantum computing technology, a new story shrouded in intrigue surfaced earlier this month.

The search giant, Google, is said to have achieved Quantum Supremacy, as revealed by a leak on the internet.

As stated in the paper, physicists at Google used a quantum computer to perform a calculation that would overwhelm the world’s best conventional supercomputer – Summit.

It contains details of a quantum processor called Sycamore, with 54 superconducting quantum bits, or qubits, which is claimed to have achieved quantum supremacy.

“This dramatic speedup relative to all known classical algorithms provides an experimental realization of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm,” the paper says.

The paper calculates the task would have taken Summit, the world’s best supercomputer, 10,000 years – but Sycamore did it in just 3 minutes and 20 seconds.
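Taking the paper's two figures at face value, the implied speedup is straightforward to compute (a rough illustration; the 10,000-year figure is itself an estimate):

```python
# Speedup implied by the paper's numbers: 10,000 years vs. 3 min 20 s.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
summit_s = 10_000 * SECONDS_PER_YEAR   # classical estimate for Summit
sycamore_s = 3 * 60 + 20               # Sycamore's runtime: 200 seconds

speedup = summit_s / sycamore_s
print(f"{speedup:.2e}")                # on the order of a billion-fold
```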

“It’s a great scientific achievement,” says physicist Chad Rigetti, the founder and CEO of Rigetti Computing in Berkeley and Fremont, California, which is also developing its own quantum computers.

Greg Kuperberg, a mathematician at the University of California, Davis, calls the advancement “a big step toward kicking away any plausible argument that making a quantum computer is impossible.”

Google’s quantum computer performed a task called a random circuit sampling problem. In this problem, after a series of calculations each qubit outputs a 1 or 0. The aim is to calculate the probability of each possible outcome occurring.
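The sampling task can be illustrated with a tiny classical simulation. This is a sketch with made-up parameters (3 qubits, nothing like Google's 54-qubit experiment): apply a random unitary to a small register and sample output bitstrings according to their probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                    # 3 qubits -> 8 possible bitstrings
dim = 2 ** n

# Build a random unitary via QR decomposition of a random complex matrix,
# standing in for a "random circuit".
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(z)
u = q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # fix column phases

state = np.zeros(dim)
state[0] = 1.0                           # start in |000>
amplitudes = u @ state
probs = np.abs(amplitudes) ** 2          # Born rule: outcome probabilities

samples = rng.choice(dim, size=5, p=probs)
print([format(int(s), f"0{n}b") for s in samples])  # sampled bitstrings
```

Classically, the cost of computing these probabilities doubles with every added qubit, which is why 54 qubits overwhelms even a supercomputer while the quantum chip simply runs the circuit.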

Based on the tech giant’s findings, Sycamore was able to find the answer in just a few minutes – a task estimated to take 10,000 years on the most powerful supercomputer, Summit.

Google’s quantum computer, called Sycamore, consisted of only 54 qubits – one of which didn’t work. For quantum computers to really come into their own, they will probably need thousands.

Though that is impressive, there is no practical use for it.

“We shouldn’t get too carried away with this,” says Ciarán Gilligan-Lee at University College London. This is an important step in the era of quantum computing, but there’s still a long way to go, he says.


This feat isn’t particularly useful beyond producing truly random numbers – it was a proof of concept, and the processor isn’t free of errors.

We still expect bigger and better processors to be built and used for more useful calculations.

In a statement, Jim Clarke at Intel Labs said:

“Google’s recent update on the achievement of quantum supremacy is a notable mile marker as we continue to advance the potential of quantum computing.”

As for the leak itself, Google appears to have partnered with NASA to help test its quantum computer. In 2018, the two organizations agreed to do this, so the news isn’t entirely unexpected.

By Tuan Nguyen

machine-brain merge

Technology review – Neuralink

Imagine a brain machine that will be interfaced with the human brain, making anyone – SUPERHUMAN.

Now, it will all be possible.

Thanks to Neuralink, the company that has been making headlines lately for its controversial brain-chip interface, and which finally revealed its top-secret project via a YouTube live stream on July 17, 2019.

The neuro-technology company aims to develop an implant device that will enable humans to control compatible technology with their thoughts. In the short term, it will initially treat serious brain diseases and brain damage caused by stroke.

With the technology it has been working on, Neuralink also aims at long-term development for human enhancement, linking artificial intelligence (AI) with the power of the human mind.

Neuralink, or Neuralink Corp., is a tech startup founded by futurist entrepreneur Elon Musk (also the CEO and founder of Tesla and SpaceX) in July 2016 to create ultra-high-bandwidth brain-machine interfaces to connect humans and computers. In 2017, the company said its initial goal was to devise brain interfaces to alleviate the symptoms of chronic medical conditions.

Tl; dr;

Neuralink is a neuro-technology company that aims to build implants connecting human brains with computer interfaces via artificial intelligence. Its initial use would let paraplegics perform simple tasks on phones or computers without any physical movement.

To be clear, the tech startup has not yet performed any human trials (hopefully by the end of 2020), but it has already started testing its prototypes on rodents, and even “a monkey has been able to control the computer with his brain,” as told by founder Musk.

The company has been super secretive about the nature of its work since its founding in 2016.

Merging humans with AI by Neuralink implant

An artist’s visualization of how a Neuralink’s brain/computer interface would look.


Source: NYT

Philosophers, sci-fi enthusiasts, and daydreamers have long dreamed of turbocharging their brainpower or reading someone else’s mind.

But… that’s still a long way from reality.

Neuralink, a company led by serial entrepreneur Elon Musk, has just been unravelling the first few sheets of its ambitious, sci-fi-inspired implantable brain-computer interfaces (BCIs). The neuro-technology company’s device prototype name was derived from a science-fiction concept called Neural Lace, part of the fictional universe in Scottish author Iain M. Banks‘ series of novels The Culture.

As reported by The Verge, the neuro-technology company’s first public project aims to implant a device in people who have been paralyzed that will allow them to control their phones and computers in a way that resembles telepathy. It involves flexible “threads” (4 to 6 μm wide), thinner than a human hair, which are less likely to cause neurological damage than the brain-machine implants currently on the market.

These very small threads are injected deep within a person’s brain to detect and record the activity of neurons. The information gathered by these tiny threads, or wires, would then be passed to an external unit that transmits it to a computer, where the data can be used – forming a brain-machine interface.

The system can include “as many as 3,072 electrodes per array distributed across 96 threads,” according to a white paper credited to “Elon Musk & Neuralink.” The company is also developing a neurosurgical robot capable of inserting six threads into the human brain per minute.

neuralink implants



Each thread, smaller than one-tenth the width of a human hair, contains 192 electrodes.

Each electrode group is encased inside a small implantable device, measuring four by four millimetres, that contains custom wireless chips.

These threads would be precisely inserted into the brain by a tiny needle (around 24 microns in diameter) at the end of the robot, targeting only specific areas of the brain and avoiding damage to any blood vessels.


In the future, Neuralink aims to make the operation as safe and painless as laser eye surgery.

As Neuralink president Max Hodak told The New York Times, currently the operation would require drilling holes into the patient’s skull to implant the threads.

But in the future, they hope to use a laser beam to pierce the skull with a series of tiny holes, which would not be felt by the patient.

By Tuan Nguyen

Technology review – WiFi 6

The next-generation wireless technology faster than 802.11ac (now known as Wi-Fi 5), Wi-Fi 6 – also called “802.11ax” or “AX Wi-Fi” – is the newest version of the 802.11 standard for wireless network transmissions that we commonly call Wi-Fi.

It was developed by the Institute of Electrical and Electronics Engineers (IEEE), known to be the world’s largest association of technical professionals. The IEEE is basically the keeper of Wi-Fi, with committees responsible for developing it and establishing industry standards.

Wi-Fi 6 was built in response to the growing number of internet-connected devices, providing faster wireless speeds, increased efficiency, and reduced congestion in heavy bandwidth usage scenarios.

Wi-Fi 6 isn’t a new means of internet connectivity, but rather an upgraded standard that compatible devices, particularly routers, can take advantage of.

Meanwhile, some tech vendors are now introducing Wi-Fi 6 compliant devices to the market. Samsung’s Galaxy S10 was the first phone to support Wi-Fi 6, followed by Apple’s new iPhone 11 series. So expect to see labels with a Wi-Fi signal indicator coupled with a number representing the connection – in this case, the number 6 – on the packaging of upcoming devices soon.

Note: The Wi-Fi Alliance has switched to version numbers, away from the confusing and less memorable IEEE 802.11 names. It won’t start offering Wi-Fi 6 certification for devices until the third quarter of 2019.
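The new naming scheme maps cleanly onto the older IEEE designations. A minimal lookup sketch (covering the three generations that received retroactive names):

```python
# Wi-Fi Alliance version numbers vs. the IEEE 802.11 designations they replace.
GENERATIONS = {
    "Wi-Fi 4": "802.11n",
    "Wi-Fi 5": "802.11ac",
    "Wi-Fi 6": "802.11ax",
}

for name, ieee in GENERATIONS.items():
    print(f"{name} = {ieee}")
```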

Wifi interface visual


Tl; dr;

Wi-Fi 6 is the next-generation standard in Wi-Fi technology. It was previously known as 802.11ax until the Wi-Fi Alliance decided to rename it more succinctly. This latest Wi-Fi standard offers data transfer speeds up to 40% higher than Wi-Fi 5, through more efficient data encoding, resulting in higher throughput.

Wi-Fi 6: The 6th Generation of Wi-Fi


wifi 6 specification

The Wi-Fi 6 router will connect more devices simultaneously

Source: TP-Link Technologies Co., Ltd.

Compared with today’s Wi-Fi 5 wireless standard, the new Wi-Fi 6 is a major upgrade – not just a change of routers, but a change in devices and Wi-Fi connectivity in general.

Let’s see how this wonderful Wi-Fi standard upgrade could change our daily lives.

Faster Wi-Fi. 4 to 10 times faster than Wi-Fi 5, Wi-Fi 6 operates in dual-band frequency mode, meaning it handles a lot more bandwidth and thus supports faster data transfer speeds than its predecessor.

OFDMA. For increased efficiency, the Wi-Fi 6 standard supports OFDMA, or Orthogonal Frequency Division Multiple Access: a way of dividing a wireless channel into a large number of sub-channels. Through these smaller channels, multiple devices can connect to the access point or router at the same time, without waiting for one device to finish first.

With OFDMA, each channel is chopped up into hundreds of smaller sub-channels, each with a different frequency. The signals are then turned orthogonally (at right angles) so they can be stacked on top of each other and de-multiplexed. With the bank analogy, imagine a teller being able to handle multiple customers when they are free. So customer one hands the teller a check and while that person is signing the check, the teller deals with the next customer, etc. The use of OFDMA means up to 30 clients can share each channel instead of having to take turns broadcasting and listening on each. ~ Network World
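The bank-teller analogy can be sketched in a few lines of Python (hypothetical device names and channel counts; real 802.11ax resource-unit allocation is far more involved):

```python
# Toy OFDMA scheduler: several clients share one time slot on different
# sub-channels, instead of each client taking its own slot.
def ofdma_schedule(clients, sub_channels):
    """Assign clients to sub-channels round-robin, one batch per time slot.

    Returns a list of time slots; each slot maps sub-channel -> client.
    """
    slots = []
    for i in range(0, len(clients), sub_channels):
        batch = clients[i:i + sub_channels]
        slots.append({ch: client for ch, client in enumerate(batch)})
    return slots

clients = [f"device-{i}" for i in range(9)]
slots = ofdma_schedule(clients, sub_channels=4)
print(len(slots))    # 3 time slots for 9 clients on 4 sub-channels
print(len(clients))  # vs. 9 slots if each device had to take turns
```

The point of the sketch: with 4 sub-channels, 9 devices finish in 3 slots instead of 9, which is the congestion win OFDMA delivers.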

MU-MIMO. Also supporting an improved version of MU-MIMO, a Wi-Fi 6 antenna can transmit to and receive from multiple other Wi-Fi 6 devices concurrently. This allows a larger amount of data to be transferred at once and enables access points (APs) to handle larger numbers of devices simultaneously.

Backward compatible. Wi-Fi 6 is carefully designed to be maximally forward and backward compatible with 802.11a/g/n/ac. Meaning, if you use a Wi-Fi 6 router now, it can still serve 802.11a/g/n/ac devices – though of course those devices won’t get the full benefits of Wi-Fi 6.

Target Wake Time (TWT). Wi-Fi 6 access points will be much smarter about scheduling when devices wake up and request information. This helps devices avoid interfering with each other, which in turn allows them to spend more time in their battery-saving sleep modes.

Longer Battery Life. With TWT, your smartphones, laptops, and other Wi-Fi-enabled devices should have longer battery life, as well.

When the access point is talking to your smartphone, it can tell the device exactly when to put its Wi-Fi radio to sleep and exactly when to wake it up to receive the next transmission. This will be a great benefit for low-power “Internet of Things” devices that connect via Wi-Fi.
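A toy scheduler makes the idea concrete. This is an assumed, simplified model (station names and timings are invented; real TWT negotiation is per-station and far richer): the access point hands each station a non-overlapping wake window so its radio can sleep the rest of the interval.

```python
# Toy Target Wake Time scheduler: each station gets its own wake window
# inside one scheduling interval, so radios sleep instead of contending.
def twt_schedule(stations, interval_ms, window_ms):
    """Return {station: (wake_at_ms, sleep_at_ms)} within one interval."""
    assert len(stations) * window_ms <= interval_ms, "interval too short"
    return {s: (i * window_ms, i * window_ms + window_ms)
            for i, s in enumerate(stations)}

schedule = twt_schedule(["sensor-a", "sensor-b", "phone"],
                        interval_ms=1000, window_ms=50)
for station, (wake, sleep) in schedule.items():
    duty = 100 * (sleep - wake) / 1000
    print(f"{station}: awake {wake}-{sleep} ms ({duty:.0f}% duty cycle)")
```

Each station is awake only 5% of the interval in this example, which is where the battery savings for low-power devices come from.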


Whether you are in a crowded stadium, airport, hotel, office, or apartment, or in the comfort of your home with everyone connected to Wi-Fi, the new Wi-Fi 6 technology (as addressed by Intel) will improve each user’s average speed by “at least 4 times” in congested areas with many connected devices.

So there’s far less chance of getting slow Wi-Fi.

By Tuan Nguyen