[HOWTO] 100% CPU on Windows 10, MSI Gaming Laptop (GP62MVR 7RFX LEOPARD PRO)

Read just this:

If you have already reset the laptop’s power plan settings and have set an AUTO fan speed within MSI Dragon Center, probably nothing else needs to be done.

Description of problem:

Windows Task Manager shows the CPU to be 100% used, with most of the load coming from a Windows process. The issue occurs on an MSI laptop, which runs MSI’s own management software, such as System Control Manager (SCM) and/or MSI Dragon Center.

Analysis:

MSI’s own applications interact with MS Windows power plans to set up the operating parameters of the system: fan speed (in RPM), settings for battery life vs performance, timeouts. Windows 10 will also switch power plans when the laptop is plugged into or unplugged from its charger, which also alters the behavior of the fans. MSI Dragon Center and Windows power management are very tightly knit. There also seems to be a connection between the power plan and CPU consumption: the more performance-oriented the power plan, the higher the reported CPU consumption. The main finding is that the 100% CPU does not relate to any specific process, but to the power plan in effect.

Ways to not see 100% in Task Manager:

Step 1, this is how Task Manager looks now:

CPU100

…and this is how SCM looks, with Dragon Center installed:

scmwithdragon

 

Step 2: Uninstall MSI Dragon Center

…and this is how SCM looks now:

scmwithoutdragon

Notice how the section with the different “modes” has appeared. Now let’s select any mode other than “ECO off”, and take a screenshot of MS Task Manager:

scmwithoutdragonnoECOoff

 

CPU005

…quite a difference. There is an immediate, noticeable change in fan speed: the laptop becomes quiet right away. What else has happened?

Right-click on the Windows logo on your screen, select “Power Options” and, at the following screen, “Additional power settings”. There you will find a new power plan, created by SCM and named after the selected “mode”; for example, mine is called “Gaming”. I tried to compare the detailed configuration of this plan (“Change plan settings”) with the one that causes the CPU to peak, and I was unable to find the exact parameter that creates the whole issue. In fact, the new power plan is quite “aggressive” in some of its settings (accessed via “Change plan settings” under Control Panel > Hardware and Sound > Power Options > Edit Plan Settings).

SCMPowerPlan
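
If you prefer to see the same list of power plans from a script rather than the Control Panel, here is a minimal sketch of my own (an assumption on my part: a Windows host where the built-in powercfg tool is available, called from Python):

# list_power_plans.py - print every Windows power plan; the active one is marked with *
import subprocess

# "powercfg /list" is a built-in Windows command that enumerates power plan GUIDs and names
output = subprocess.run(["powercfg", "/list"],
                        stdout=subprocess.PIPE,
                        universal_newlines=True).stdout
print(output)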

So what now?

With MSI Dragon Center installed again, there is no way to create further SCM power plans. Using the Windows default ones reverts the CPU to 100%, but look:

dragoncentervstaskmanager

 

What is correct?

If anybody knows, and feels confident in their hardware/Windows skills, please leave a comment. For myself, I have decided to keep a less restrictive plan for the fans and accept the CPU 100% reading, which I think is not crucial. This is because I am worried that throttling the cooling system might affect the well-being of my Leopard’s motherboard. I suppose the choice is yours.

 

 

 

 

How to exit Debug Browser in R

This works if you have enabled debugging for a single command/function and you are not sure how to exit the Browser prompt. To save you the trouble: the popular Google results that tell you to press f, Q, ESC or type q() do not solve the problem on their own, because debugging stays enabled for the function. The behavior of the debug() command is identical in RStudio and at the R prompt, at least on the Windows 10 system I tested.

Here are two examples of enabling/disabling debug.

First we use a random sapply command:

> sapply(split(mtcars$mpg, mtcars$cyl), mean)
4 6 8
26.66364 19.74286 15.10000

Now we turn debugging on:

> debug(sapply)

and, as expected, nothing happens until we try to run sapply again:

> sapply(split(mtcars$mpg, mtcars$cyl), mean)
debugging in: sapply(split(mtcars$mpg, mtcars$cyl), mean)
debug: {
    FUN <- match.fun(FUN)
    answer <- lapply(X = X, FUN = FUN, ...)
    if (USE.NAMES && is.character(X) && is.null(names(answer)))
        names(answer) <- X
    if (!isFALSE(simplify) && length(answer))
        simplify2array(answer, higher = (simplify == "array"))
    else answer
}
Browse[2]>

There are two things to type here. The first is the command undebug(); in this case it will be undebug(sapply). Then Q exits the Browser prompt, and we can run the function outside of debug mode again. Let’s see:

Browse[2]> undebug(sapply)
Browse[2]> Q
> sapply(split(mtcars$mpg, mtcars$cyl), mean)
4 6 8
26.66364 19.74286 15.10000

The same works with ls():

> ls(mtcars)
[1] "am" "carb" "cyl" "disp" "drat" "gear" "hp" "mpg" "qsec" "vs"
[11] "wt"
> debug(ls)
> ls(mtcars)
debugging in: ls(mtcars)
debug: {
    if (!missing(name)) {
        pos <- tryCatch(name, error = function(e) e)
        if (inherits(pos, "error")) {
            name <- substitute(name)
            if (!is.character(name))
                name <- deparse(name)
            warning(gettextf("%s converted to character string",
                sQuote(name)), domain = NA)
            pos <- name
        }
    }
    all.names <- .Internal(ls(envir, all.names, sorted))
    if (!missing(pattern)) {
        if ((ll <- length(grep("[", pattern, fixed = TRUE))) &&
            ll != length(grep("]", pattern, fixed = TRUE))) {
            if (pattern == "[") {
                pattern <- "\\["
                warning("replaced regular expression pattern '[' by '\\\\['")
            }
            else if (length(grep("[^\\\\]\\[<-", pattern))) {
                pattern <- sub("\\[<-", "\\\\\\[<-", pattern)
                warning("replaced '[<-' by '\\\\[<-' in regular expression pattern")
            }
        }
        grep(pattern, all.names, value = TRUE)
    }
    else all.names
}

Browse[2]> undebug(ls)
Browse[2]> Q
> ls(mtcars)
[1] "am" "carb" "cyl" "disp" "drat" "gear" "hp" "mpg" "qsec" "vs"
[11] "wt"
>

Linear Algebra in R, create and invert matrices

This is a simple test case of creating a random 2×2 matrix, performing its inversion, and multiplying the two. We will also use MS Excel to check our computations.
Create a 2×2 matrix

> mNP <- matrix(rnorm(4),nrow=2,ncol=2)

This command uses the matrix() function. Its first argument, the data, uses the rnorm() function to generate 4 random variates, and the next two arguments specify that these are to be arranged in a matrix of 2 rows by 2 columns. Now let’s display the matrix:

> mNP
[,1] [,2]
[1,] 0.5644179 -0.4694577
[2,] 0.7707571 0.1500823

Next, we use the solve() function to invert the matrix, and display the output:

> mNP_inv <- solve(mNP)
> mNP_inv
[,1] [,2]
[1,] 0.3360953 1.051306
[2,] -1.7260381 1.263961

Finally, we use the %*% operator to perform the matrix multiplication and check that we arrive at the identity matrix I:

> mNP %*% mNP_inv
[,1] [,2]
[1,] 1 0
[2,] 0 1
>

If you want to do the last operation in Excel as a check, suppose that the first matrix sits in cells A1:B2 and the second one sits in cells D1:E2. Their multiplication product would be:
Top left:    =A1*D1+B1*D2     Top right:    =A1*E1+B1*E2
Bottom left: =A2*D1+B2*D2     Bottom right: =A2*E1+B2*E2
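
If you would rather verify the same product in Python instead of Excel, here is a minimal sketch of my own using NumPy (assuming NumPy is installed; the values are the ones printed by R above):

# numpy_check.py - verify the R result with NumPy, using the values displayed above
import numpy as np

# the original 2x2 matrix as printed by R
mNP = np.array([[0.5644179, -0.4694577],
                [0.7707571,  0.1500823]])

# invert it and multiply; np.round hides tiny floating-point noise
mNP_inv = np.linalg.inv(mNP)
print(np.round(mNP @ mNP_inv, 6))   # expect the 2x2 identity matrix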

What I learned by casually studying Python for ten days

How difficult is Python

Looking back at what my generation considers “programming”, a term later changed to “development”, we see a gradual shift in programming languages: from tools that help us talk to a machine in its native language (which is the instruction set of its processor(s)) to a toolset that comes ever closer to understanding business terms and needs less delving into the binary reality of a processor. What has remained the same? The need to implement certain functionality, whether it be displaying a scatter plot on screen, calculating a standard deviation, or handling files; anything that may be required.

From that point of view, programming has become “easier”, while development has become “harder”. This is not a controversial statement. It is easier now to create an array that will hold data, for example. Less and less complexity has to be dealt with, whether that is working with files, memory, or parallelizing. At the same time, the plethora of available tools and the complexity of the modern IT ecosystem, combined with the relative simplicity of each individual tool, put the developer in the position of conducting a full-sized orchestra. Knowledge of procedural programming, of the libraries/tools relevant to the business, and an intimacy and “instinct” for the data at hand are all considered necessary assets.

This is where Python, and similar solutions like R, stand. Working with files and memory and writing your program logic is not very unlike using a previous-generation language. Many features, like argv[0] to get the executable path/name, are very similar to ANSI C. Their true strength, and complexity, lies in becoming skilled with the data and the functions available at hand. This may take much more time to learn than just going through files and printing the infamous “Hello World”. Back to the original question, and keeping in mind the title of this article: Python is easy to “program”, yet can be infinitely hard to “develop”.
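
As a small illustration of the argv[0] similarity mentioned above, here is a minimal sketch (the script name is my own placeholder):

# show_args.py - sys.argv in Python resembles argv in ANSI C
import sys

if __name__ == "__main__":
    # sys.argv[0] holds the script path/name, just like argv[0] in C
    print("Script:", sys.argv[0])
    # any further command-line arguments follow, as strings
    for i, arg in enumerate(sys.argv[1:], start=1):
        print("Argument", i, ":", arg)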

Who should learn Python

Python is an analytical programming tool commonly associated with Machine Learning, Artificial Intelligence and Big Data. If engineering in those domains sounds interesting to you, it is probably time to start. While it may or may not be part of every solution, it is a very common tool to use, along with R

Simple things to get you started

First, install the environment. The packages are located here: https://www.python.org/downloads/ . If there is any request on my site, I can write a step-by-step installation guide.

If you plan to use graphics, choose a website that offers graphing functions and become familiar with it. I started using https://plot.ly/#/ , which I believe I discovered through my Google feed.

How to get inspired

The site https://www.learnpython.org/ offers online courses to get one started with the language (I am not affiliated with them, but I did find their content useful). It is highly recommended to really work through the exercises rather than just scroll through the code. It took me a while to realize that indentation can indicate nested operations, for example; see the small sketch below. So going through the simplest examples and working your way up is highly recommended.
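
Here is a minimal sketch of that indentation point (my own example):

# indent_demo.py - indentation, not braces, defines nested blocks in Python
numbers = [1, 2, 3, 4]

for n in numbers:              # everything indented below belongs to the loop
    if n % 2 == 0:             # this if is nested inside the loop
        print(n, "is even")
    else:
        print(n, "is odd")
print("done")                  # back at the left margin: outside the loop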

Also, if I can create a plot from a CSV, and host the result online, then so can you! Have a look at my article here: Make a bar-chart from a CSV in Python

Make a bar-chart from a CSV in Python

plot.ly CSV bar plot
Plot.ly bar chart using a CSV

The test case was implemented in Python 3.6.5 running on an Ubuntu Linux 18.04 64-bit virtual machine. In order to carry out this test case, you will need to create an account on plot.ly and create the credentials file on the host you will be running Python from. All instructions are on their website.

Step 1:
Suppose a CSV whose first column we want to use as the X-axis of our plot, and two further columns which we want as the data on the Y-axis. It could be something like this:

~$ cat /home/nikolas/categories.csv
SciFi-Fantasy , 31.550787 , 68.449219
Spirituality , 83.411890 , 16.588112
Home-Improvement , 47.082787 , 52.917217
Gaming , 2.256584 , 97.743423
Mountain-Bike-Touring , 40.905171 , 59.094826
Korean-Culture , 71.040140 , 28.959862
Health-Safety , 32.872467 , 67.127533
Religion , 37.452973 , 62.547028
Fashion , 98.597282 , 1.402729

Step 2:
Load the CSV into a data frame using the pandas function read_csv, and display each column using iloc:

Python 3.6.5 (default, Apr 1 2018, 05:46:30)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> mycatg=pd.read_csv('/home/nikolas/categories.csv',sep=',',header=None)
>>> mycatg.iloc[:,0]
0 SciFi-Fantasy
1 Spirituality
2 Home-Improvement
3 Gaming
4 Mountain-Bike-Touring
5 Korean-Culture
6 Health-Safety
7 Religion
8 Fashion
Name: 0, dtype: object
>>> mycatg.iloc[:,1]
0 31.550787
1 83.411890
2 47.082787
3 2.256584
4 40.905171
5 71.040140
6 32.872467
7 37.452973
8 98.597282
Name: 1, dtype: float64
>>> mycatg.iloc[:,2]
0 68.449219
1 16.588112
2 52.917217
3 97.743423
4 59.094826
5 28.959862
6 67.127533
7 62.547028
8 1.402729
Name: 2, dtype: float64

Step 3:

Include the plot.ly libraries

import plotly.plotly as py
import plotly.graph_objs as go

then define the two bar traces, using the same iloc addressing of the data frame columns as above:

yaxis1 = go.Bar(
    x=mycatg.iloc[:, 0],
    y=mycatg.iloc[:, 1],
    name='Category A'
)
yaxis2 = go.Bar(
    x=mycatg.iloc[:, 0],
    y=mycatg.iloc[:, 2],
    name='Category B'
)

data = [yaxis1, yaxis2]
layout = go.Layout(
    barmode='group'
)

then perform the plot itself.

fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='barplot in Plot.ly, smalldeskbigdata.com')

Step 4:

This is it. The plot is created and hosted in your plot.ly account. In this case the graph was created here: https://plot.ly/create/?fid=NotNikolas63500%3A12
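
For convenience, here is the whole flow from Steps 2 and 3 gathered into a single sketch. It makes the same assumptions as above (the categories.csv path and a configured plot.ly credentials file) and uses py.plot instead of py.iplot, since py.plot also works outside a notebook and returns the URL of the hosted chart:

# csv_barplot.py - consolidated sketch of Steps 2 and 3 above
import pandas as pd
import plotly.plotly as py
import plotly.graph_objs as go

# load the CSV; there is no header row, so header=None
mycatg = pd.read_csv('/home/nikolas/categories.csv', sep=',', header=None)

# column 0 -> X-axis labels, columns 1 and 2 -> the two bar traces
yaxis1 = go.Bar(x=mycatg.iloc[:, 0], y=mycatg.iloc[:, 1], name='Category A')
yaxis2 = go.Bar(x=mycatg.iloc[:, 0], y=mycatg.iloc[:, 2], name='Category B')

data = [yaxis1, yaxis2]
layout = go.Layout(barmode='group')   # side-by-side bars for the two categories

# upload the figure to plot.ly and print the URL of the hosted chart
fig = go.Figure(data=data, layout=layout)
url = py.plot(fig, filename='barplot in Plot.ly, smalldeskbigdata.com')
print(url)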

How to install a Linux Virtual Machine

Preface

You need two pieces of software to have Linux machine going under your laptop: The virtualization software, and an image of the operating system you plan to stage in your Virtual Machine (VM). Trying to keep this article as short as possible, any issues with hardware requirements and licensing for production use are left outside.

Step 1

Identify the combination of virtualization software and Linux platform you need. This demo will install Oracle VM VirtualBox and deploy Ubuntu 18.04 64-bit on a laptop running Windows 10 Home.

Step 2

Download the VirtualBox binaries here: https://www.virtualbox.org/wiki/Downloads

VirtualBox Download
Select your platform (“Windows hosts” in our case), choose Save, let it download and then run the executable.
Oracle VM installation
Go through the installation steps. As a start, you can leave everything default. Keep disk space in mind

Download the LINUX distribution:
https://www.ubuntu.com/download/desktop

Download the Linux release
Review the release notes, and observe the system requirements. Since you will be running the OS as a VM, its requirements add to the requirements of your host operating system and your virtualization software.

Ubuntu Desktop download
Choose to save the file. Ideally, keep its download path simple (e.g. c:\VMs\UBU1804). Avoid long paths or complicated names.

Once done, it is time to launch the VirtualBox and create the virtual machine. Find the “Oracle VM VirtualBox” icon on your desktop or program group. It looks like this:

Capture
Double click to launch

When it has launched, click on the left-most icon in the toolbar (“New”), then provide a name for your VM, the type and the version. These should match the Linux distro you have downloaded.

Creating the VM
Type a name of your choice, then select in the Type and Version lists the distribution you have already downloaded.

Memory selection
This entry should match the requirements of the distro (see above) but also the capacity of your machine.

In the next three pages of the install process you need to create a Virtual Hard Disk. The simplest choice is to select VHD (Virtual Hard Disk)/Fixed size. The distro notes should point to a minimum disk requirement (see above screenshot)
Capture
Minimum disk should be the requirement of the Linux distribution of choice, or larger

Capture
…this might take some time, be patient!

Capture
When this is finished, you will see your newly created machine in the list. Click on the green “Start” button to launch it for the first time

…The first time you launch your VM, it will ask for a start-up disk. This will be the Linux distribution file that we downloaded:
Capture
The start-up disk for your Linux Virtual Machine is the distribution you have already downloaded. Notice the .ISO extension of the file

From then on, there will be a Welcome dialogue (similar to the initial setup screens that come with smartphones).
Capture
When running this dialogue within Oracle VirtualBox, the “Try Ubuntu” option allows you to keep using the .ISO file, while the “Install Ubuntu” option will use the Virtual Hard Disk we created previously and install the operating system there. In either case, our laptop’s existing OS will not be affected.

Ready VM
The result of your effort: a fully functional Linux machine, with Internet access and device connectivity (USB headsets, mouse) just like your “real” laptop.

Why Big Data, simply but not simplistically

Whether in human or machine intelligence, one can think of two main categories of solutions to problems. The first is the kind of problem that has a deterministic, rule-based solution. The second is a problem where a decision, or outcome, cannot be derived from a mathematical formula or from a correlation of factors that are fairly constant in number and of comparable weight. How are those problems solved? By data. Lots and lots of data. While how we go from X to Y (whether Y is a category, a yes/no answer or a prediction) may not be known, we have sufficiently large sets of (X, Y) pairs to feel confident that we can apply different models and decide which one fits the dataset most closely, with the least amount of error or uncertainty.

Current technology, both in hardware processing and in software solutions, has allowed us to design systems that can store and analyze such datasets in a manner much more economical and scalable than before. Big Data is the term that encompasses those datasets and everything around them. The data itself, the technology and software solutions to store it efficiently at scale, the procedures to unify different data sources and generalize or prepare the data for decision making, the intuition of the Data Scientists who understand the nature of the data, and the choice of tools for a given application are all parts of the Big Data revolution.

It would be interesting to discuss, with comments from your side, what kind of problems you would place in each of the two cases (or possibly a different one). Thank you in advance.