Proofreading Copy status has been achieved.

This commit is contained in:
shelldweller 2021-11-10 12:47:38 -07:00
parent 7d4fa10a02
commit 42dd93894c
2 changed files with 104 additions and 100 deletions


@ -49,16 +49,16 @@ knitr::knit_hooks$set(chunk = function(x, options) {
## Introduction
Ransomware attacks are of interest to security professionals, law enforcement, and financial regulatory officials.$^{[1]}$ The pseudonymous Bitcoin network provides a convenient method for ransomware attackers to accept payments without revealing their identity or location. The victims (usually hospitals or other large organizations) come to learn that much, if not all, of their important organizational data have been encrypted with a secret key by an unknown attacker. They are instructed to make a payment to a specific Bitcoin address by a certain deadline to have the data decrypted, or else it will all be deleted automatically.
The deeper legal and financial implications of ransomware attacks are inconsequential to the work in this report, as we are merely interested in being able to classify Bitcoin addresses by their connection to ransomware transactions. Many researchers are already tracking illicit activity (such as ransomware payments) around the Bitcoin blockchain in order to minimize financial losses. Daniel Goldsmith explains some of the reasons and methods of blockchain analysis at [Chainalysis.com](https://www.chainalysis.com/).$^{[2]}$ For example, consider a ransomware attack conducted against an illegal darknet market site. The news of such an attack might not be announced at all to prevent loss of trust among its users. By analyzing the global transaction record with a blockchain explorer such as [BTC.com](https://btc.com/), suspicious activity could be flagged in real time given a sufficiently robust model. It may, in fact, be the first public notice of such an event. Any suspicious addresses could then be blacklisted or banned from using other services, if so desired.
Lists of known ransomware payment addresses have been compiled and analyzed using various methods. One well-known paper entitled "BitcoinHeist: Topological Data Analysis for Ransomware Detection on the Bitcoin Blockchain"$^{[3]}$ will be the source of our data set and the baseline to which we will compare our results. In that paper, Akcora, et al. use Topological Data Analysis (TDA) to classify addresses on the Bitcoin blockchain into one of 28 known ransomware address groups. Addresses with no known ransomware associations are classified as *white*. The blockchain is then considered as a heterogeneous Directed Acyclic Graph (DAG) with two types of nodes describing *addresses* and *transactions*. Edges are formed between the nodes when a transaction can be associated with a particular address.
Any given address on the Bitcoin network may appear many times, possibly with different inputs and outputs each time. The Bitcoin network data has been divided into 24-hour time intervals with the UTC-6 timezone as a reference, allowing for variables to be defined in a specific and meaningful way. For example, *speed* can be defined as the number of blocks the coin appears in during a 24-hour period, and provides information on how quickly a coin moves through the network. *Speed* may be an indicator of money laundering or "coin mixing", as typical payments only involve a limited number of addresses in a given 24-hour period, and thus have lower *speeds* when compared to "mixed" coins. The temporal data can also help distinguish transactions by geolocation, as criminal transactions tend to cluster in time.
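As a toy illustration of how such a windowed feature could be derived (hypothetical data frame and column names, not the original authors' extraction code), a *speed*-like value can be computed by counting the distinct blocks each address appears in within one 24-hour window:

```r
# Hypothetical sketch: a "speed"-like feature as the number of distinct
# blocks an address appears in during a given 24-hour window (`day`).
tx <- data.frame(
  address = c("a1", "a1", "a1", "a2", "a2"),
  block   = c(100, 101, 102, 100, 100),
  day     = c(1, 1, 1, 1, 1)
)
speed <- aggregate(block ~ address + day, data = tx,
                   FUN = function(b) length(unique(b)))
names(speed)[3] <- "speed"
speed
# address a1 appears in 3 distinct blocks on day 1; a2 appears in only 1
```

Repeated appearances in the same block are counted once, so an address that merely receives many outputs in a single transaction does not register a high *speed*.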
With the graph specified as such, the following six numerical features$^{[2]}$ are associated with a given address:
1) *Income* - the total amount of bitcoins sent to an address
2) *Neighbors* - the number of transactions that have this address as one of its output addresses
@ -95,7 +95,6 @@ if(!require(parallel)) install.packages("parallel")
if(!require(matrixStats)) install.packages("matrixStats")
if(!require(xtable)) install.packages("xtable")
if(!require(tictoc)) install.packages("tictoc")
# Load Libraries
library(tidyverse)
@ -106,9 +105,8 @@ library(parallel)
library(matrixStats)
library(xtable)
library(tictoc)
# Set number of cores, use detectCores() - 1 to leave one for the system
n_cores <- detectCores()
# Download data
@ -156,16 +154,16 @@ my.msg.toc <- function(tic, toc, msg, info)
```
A summary of the data set shows the range of values and size of the sample. Some of the features, such as *weight*, already appear to be very skewed just from the quartiles. In the case of *weight*, the third quartile is only `r quantile(ransomware$weight, 0.75)`, meaning that 75% of the data is at or below this value for *weight* (with a minimum of `r min(ransomware$weight)`). The maximum *weight* value, however, is `r max(ransomware$weight)`. This means that nearly the entire range of values occurs in the upper 25%. In fact, many of the numerical features are similarly skewed, as you can see in the following summary.
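The same quartile-based check can be reproduced on any skewed vector. This minimal sketch uses simulated log-normal data standing in for a column like *weight* (not the actual data), showing how a third quartile far below the maximum signals a heavy right tail:

```r
# Simulated heavy-tailed data standing in for a skewed feature like `weight`
set.seed(42)
x <- rlnorm(10000, meanlog = 0, sdlog = 2)  # log-normal: strongly right-skewed

q3 <- quantile(x, 0.75)  # 75% of observations fall at or below this value
# Nearly the entire range sits in the top 25% of observations:
c(q3 = unname(q3), max = max(x))
```

For such distributions the mean sits well above the median, which is why log-scaled axes are used for the histograms later in the report.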
```{r data_summary, echo=FALSE, size="tiny"}
# Summary
ransomware %>% select(-address, -label) %>% summary() %>% knitr::kable(caption="Summary of data set")
```
This data set has 2,916,697 observations of ten features associated with a sample of transactions from the Bitcoin blockchain. The ten features include *address* as a unique identifier, the six numerical features defined previously (*income, neighbors, weight, length, count, loop*), two temporal features in the form of *year* and *day* (day of the year as an integer from 1 to 365), and a categorical feature called *label* that categorizes each address as either *white* (i.e. not connected to any ransomware activity), or one of 28 known ransomware groups as identified by three independent ransomware analysis teams (Montreal, Princeton, and Padua)$^{[3]}$. A listing of the first ten rows provides a sample of the features associated with each observation.
```{r data_head, echo=FALSE, size="tiny"}
@ -179,7 +177,7 @@ The original research team downloaded and parsed the entire Bitcoin transaction
### Goal
The goal of this project is to apply different machine learning algorithms to the same data set used in the original paper, producing a practical predictive model for categorizing ransomware addresses with an acceptable degree of accuracy. Increasing the precision, while not strictly necessary for the purposes of the project, would be a notable sign of success.
### Outline of Steps Taken
@ -206,7 +204,7 @@ The original research team downloaded and parsed the entire Bitcoin transaction
### Data Preparation
It is immediately apparent that this is a rather large data set. The usual practice of partitioning out 80% to 90% of the data for training results in a training set that is too large to process given the hardware limitations. For historical reasons, the original data set was first split in half, with 50% reserved as the *validation set* and the other 50% used as the *working set*. This working set was again split in half to give a *training set* of a manageable size. Since these partitions were small enough to work with, the partition size ratio was not refined further; optimizing the partitioning scheme remains a potential area for later improvement. Careful sampling was carried out to ensure that the ransomware groups were represented in each sample as much as possible.
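A minimal base-R sketch of such stratified splitting (hypothetical toy data; packages such as `caret` provide `createDataPartition` to do this automatically) samples within each label group so that rare families appear on both sides of the split whenever their size allows:

```r
# Stratified 50/50 split: sample half of each label group so rare
# ransomware families are represented in both partitions when possible.
set.seed(5)
df <- data.frame(
  label  = rep(c("white", "cryptolocker", "locky"), times = c(100, 6, 2)),
  income = runif(108)
)
# Note: groups of size 1 would need special handling, since sample(x, ...)
# treats a scalar x as 1:x rather than as a one-element vector.
idx <- unlist(lapply(split(seq_len(nrow(df)), df$label),
                     function(i) sample(i, ceiling(length(i) / 2))))
train <- df[idx, ]
rest  <- df[-idx, ]
table(train$label)  # every family keeps roughly half its members
```

A plain random split of the same data could easily drop a two-member family from the training set entirely, which is exactly the failure mode stratification guards against.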
```{r data_prep, echo=FALSE, include=FALSE}
@ -284,24 +282,25 @@ labels <- ransomware$label %>% summary()
```
The proportion of ransomware addresses in the original data set is `r ransomprop`. Thus, they make up less than 2% of all observations. This presents a challenge as the target observations are sparse within the data set, especially when we consider that this small percentage is then further divided into 28 subsets. In fact, some of the ransomware groups have only a single member, making categorization a dubious task.
The total number of `NA` or missing values in the original data set is `r no_nas`. At least there are no missing values to worry about. The original data set is clean in that sense.
A listing of all ransomware families in the full original data set, plus a member count for each family, is shown in Table 3. As can be seen, `r length(unname(labels)[unname(labels)<10])` of the 28 families have fewer than 10 addresses associated with them. We shall keep this in mind for later.
```{r ransomware_families, echo=FALSE}
# Print ransomware family summary table
knitr::kable(list(labels[1:10], labels[11:20], labels[21:29]),
caption="Ransomware families and membership counts",
booktabs = TRUE,
format = "latex",
col.names = c("n") )
```
We can take a look at the overall distribution of the different features. The temporal features have been left out; those plots are essentially flat due to the capped nature of the address collection, which leaves each day of the year roughly equally represented across the set. The skewed nature of the non-temporal features makes the plots more readable on a log$_2$ scale $x$-axis.
```{r histograms, echo=FALSE, warning=FALSE, fig.align="center"}
########################################################
@ -323,29 +322,29 @@ histograms <- ggplot(train_long, aes(x = value)) +
geom_histogram(aes(y = ..density..), bins=20) +
geom_density(col = "green", size = .5) +
scale_x_continuous(trans='log2') +
facet_wrap(~ name, scales = "free") +
ggtitle("Histograms and density plots for non-temporal features")
histograms + theme(axis.text.x = element_text(size = 8, angle=30, hjust=1))
```
We can easily compare the relative spread of each feature by calculating the coefficient of variation for each column. Larger coefficients of variation indicate larger relative spread compared to other columns. A listing of the coefficients of variation for the non-temporal features is shown in Table 4.
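The coefficient of variation is simply the standard deviation scaled by the mean, which makes spreads comparable across columns with very different units and magnitudes. A base-R sketch with illustrative columns (not the actual feature matrix):

```r
# Coefficient of variation: sd(x) / mean(x), a unit-free measure of spread
coeff_var <- function(x) sd(x) / mean(x)

set.seed(7)
toy <- data.frame(
  narrow = rnorm(1000, mean = 100, sd = 5),     # low relative spread
  skewed = rlnorm(1000, meanlog = 0, sdlog = 2) # high relative spread
)
sapply(toy, coeff_var)  # the skewed column has a much larger CV
```

Because the CV is scale-free, a feature measured in satoshis and one measured in counts can be ranked on the same footing, which is what the table below relies on.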
```{r coefficients_of_variation, echo=FALSE}
# Summarize CV results in a table
knitr::kable(
list(coeff_vars[1:2], coeff_vars[3:4], coeff_vars[5:6]),
format = "latex", booktabs = TRUE, caption="Coefficients of Variation",
col.names = c("CV") )
```
From this, it appears that `r selected_features[1]` has the widest range of variability, followed by `r selected_features[2]`. These are also the features that are most strongly skewed to the right, meaning that a few addresses have very high values for each of these features while the bulk of the data set has very low values.
Taking the feature with the highest variation, `r selected_features[1]`, we can look at its distribution across the individual ransomware families to see if there is any similarity between them. This could be done for every feature, but we will focus on `r selected_features[1]` in the interest of saving space and avoiding repetition; as the feature with the highest coefficient of variation, its distribution plots show the most variation anyway.
```{r variation_histograms, echo=FALSE, fig.height=2, fig.width=2.5, fig.show="hold", out.width='35%', warning=FALSE}
@ -629,27 +628,27 @@ The percentage of wallets with less than one hundred bitcoins as their balance i
### Insights gained from exploration
After visually and numerically exploring the data, it becomes clear what the challenge is. Ransomware-related addresses are very sparse, comprising `r ransomprop*100`% of all addresses. This small percentage is also further classified into 28 groups. Perhaps the original paper was overly ambitious in trying to categorize all the addresses into 29 categories, including the vastly prevalent *white* addresses. To simplify our approach, we will categorize the addresses in a binary way: as either *white* or *black*, where *black* signifies an association with ransomware transactions. Asking this as a "ransomware or not-ransomware" question allows for the application of methods that have been shown to be impractical otherwise.
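Collapsing the 29 labels down to two classes is a one-line transformation. A sketch assuming a `label` factor containing *white* plus family names (toy values, hypothetical column):

```r
# Recode 29-way family labels into a binary black/white factor.
labels <- factor(c("white", "white", "cryptolocker", "locky", "white"))
bw <- factor(ifelse(labels == "white", "white", "black"),
             levels = c("black", "white"))
table(bw)
# bw: black 2, white 3
```

Fixing the factor levels explicitly keeps the confusion matrices later in the report oriented consistently, regardless of which class happens to appear first in a given sample.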
---
## Modeling approach
Akcora, et al. applied a Random Forest approach to the data; however "Despite improving data scarcity, [...] tree based methods (i.e., Random Forest and XGBoost) fail to predict any ransomware family".$^{[3]}$ Considering all ransomware addresses as belonging to a single group may help to improve the predictive power of such methods, making Random Forest worth another try.
The topological description of the data set inspired a search for topological machine learning methods, although one does not necessitate the other. Searching for *topo* in the documentation for the `caret` package$^{[6]}$ resulted in the entry for Self Organizing Maps (SOMs), supplied by the `kohonen` package.$^{[11]}$ The description at CRAN$^{[7]}$ was intriguing enough to merit further investigation.
Initially, the categorization of ransomware into the 29 different families (including *white*) was attempted using SOMs. This proved to be very resource intensive, requiring more time and RAM than was available. Although it did help to illuminate how SOMs are configured, the resource requirements of the algorithm became a deterrent. It was at this point that the SOMs were applied in a binary way, classifying all ransomware addresses as merely *black*, initially in an attempt to simply get the algorithm to run to completion without error. This reduced RAM usage to the point of being feasible on the available hardware.
Self Organizing Maps were not covered in the coursework at any point, so a familiar method was sought out for comparison. Random Forest was chosen and applied to the data set in a binary way, classifying every address as either *white* or *black* and ignoring the ransomware families. Surprisingly, not only did the Random Forest approach result in an acceptable model, it did so much more quickly than expected, taking only a few minutes to produce results.
It was very tempting to leave it there and write up a comparison of the two approaches to the binary problem by classifying all ransomware-related addresses as *black*. However, a nagging feeling that more could be done eventually inspired a second look at the categorical problem of grouping the ransomware addresses into the 28 known families. Given the high accuracy and precision of the binary Random Forest approach, the sparseness of the ransomware in the larger set has been eliminated completely, along with any chance of false positives. There are a few cases of false negatives, depending on how the randomization is done during the sampling process. However, the Random Forest method does not seem to produce many false positives (if any), meaning it never seems to predict a truly white address as being black. Hence, by applying the Random Forest method first, we have effectively filtered out any possibility of false positives by correctly identifying a very large set of purely *white* addresses, which are then removed from the set. The best model used in the original paper by Akcora, et al. resulted in more false positives than true positives. This low precision rate is what made it impractical for real-world usage.$^{[3]}$
All of these factors combined to inspire a two-part method: first to separate the addresses into *black* and *white* groups, and then to further classify the *black* addresses into ransomware families. We shall explore each of these steps separately.
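Conceptually, the composite model is a filter-then-classify pipeline. This schematic sketch uses toy data and a placeholder `stage2` function standing in for the fitted Random Forest and SOM models (all names here are illustrative, not the report's actual objects):

```r
# Two-stage sketch: stage 1 separates black from white, and stage 2
# assigns families only to addresses that stage 1 flagged as black.
addrs <- data.frame(
  id     = paste0("addr", 1:6),
  bw_hat = c("white", "black", "white", "black", "black", "white")
)

# Stage 1: drop everything predicted white
black_only <- addrs[addrs$bw_hat == "black", ]

# Stage 2: a placeholder family classifier applied to the reduced set
stage2 <- function(df) sample(c("cryptolocker", "locky"), nrow(df),
                              replace = TRUE)
set.seed(1)
black_only$family_hat <- stage2(black_only)
nrow(black_only)  # only 3 of 6 addresses reach the expensive second stage
```

The payoff of this design is that the costly 28-way classifier only ever sees the small predicted-*black* subset rather than the millions of *white* addresses.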
### Method Part 0: Binary SOMs
The first working model that ran to completion without exhausting computer resources ignored the ransomware family labels and instead used the two categories of *black* and *white*. The `kohonen` package provides algorithms for both supervised and unsupervised model building, using Self Organizing Maps and Super Organizing Maps, respectively.$^{[11]}$ A supervised approach was used since the data set includes information about the membership of ransomware families that can be used to train the model.
```{r binary_SOMs}
##############################################################################
@ -666,7 +665,7 @@ The first working model that ran to completion without exhausting computer resou
##############################################################################
# Start timer
tic("Binary SOMs", quiet = FALSE, func.tic = my.msg.tic)
# Keep only numeric columns, ignoring dates and looped.
som1_train_num <- train_set %>% select(length, weight, count, neighbors, income)
@ -702,7 +701,6 @@ som1_train_list <-
grid_size <- round(sqrt(5*sqrt(nrow(train_set))))
# Based on categorical number, method 2
grid_size
# Create SOM grid
som1_train_grid <-
@ -753,12 +751,12 @@ cm_bw.validation <-
validation$bw)
# End timer
toc(quiet = FALSE, func.toc = my.msg.toc, info = "Run Time")
```
After training the model, we obtain the confusion matrices for the test set and the validation set, separately. As you can see in Tables 5 and 6, the results are very good in both cases.
```{r binary_SOM_results, echo=FALSE, results='asis' }
@ -772,13 +770,13 @@ cm1_validation_set <- cm_bw.validation %>% as.matrix() %>%
cat(c("\\begin{table}[!htb]
\\begin{minipage}{.5\\linewidth}
\\caption{Test set confusion matrix}
\\centering",
cm1_test_set,
"\\end{minipage}%
\\begin{minipage}{.5\\linewidth}
\\centering
\\caption{Validation set confusion matrix}",
cm1_validation_set,
"\\end{minipage}
\\end{table}"
@ -787,9 +785,7 @@ cat(c("\\begin{table}[!htb]
```
This is a very intensive method compared to what follows. It was left out of the final version of the script and has been included here only for model comparison and to track developmental evolution.
### Method Part 1: Binary Random Forest
@ -828,7 +824,7 @@ ransomware_y_hat_rf <- predict(fit_rf, ransomware)
cm_ransomware <- confusionMatrix(ransomware_y_hat_rf, ransomware$bw)
# End timer
toc(quiet = FALSE, func.toc = my.msg.toc, info = "Run Time")
```
@ -843,11 +839,13 @@ cm2_test_set <- cm_test %>% as.matrix() %>%
# overall results
cm2_overall <- cm_test$overall %>%
knitr::kable(format = "latex", booktabs = TRUE,
col.names=c("score"))
# by class.
cm2_byClass <- cm_test$byClass %>%
knitr::kable(format = "latex", booktabs = TRUE,
col.names=c("score"))
# Confusion matrix for full ransomware set,
@ -856,29 +854,31 @@ cm3_full_set <- cm_ransomware %>% as.matrix() %>%
# overall results
cm3_overall <- cm_ransomware$overall %>%
knitr::kable(format = "latex", booktabs = TRUE,
col.names=c("score"))
# by class.
cm3_byClass <- cm_ransomware$byClass %>%
knitr::kable(format = "latex", booktabs = TRUE,
col.names=c("score"))
```
Tables 7 and 8 show the confusion matrices for the test set and the full set resulting from the Random Forest model, respectively. Note the absence of false negatives (upper-right corners), meaning that no truly *black* addresses were predicted to be *white*. The converse does not hold: a few truly *white* addresses are marked as *black* (lower-left corners).
```{r random-forest-comfusion_matrices, echo=FALSE, results='asis'}
# Print all three tables on one line
cat(c("\\begin{table}[!htb]
\\begin{minipage}{.5\\linewidth}
\\caption{Test set confusion matrix}
\\centering",
cm2_test_set,
"\\end{minipage}%
\\begin{minipage}{.5\\linewidth}
\\centering
\\caption{Full set confusion matrix}",
cm3_full_set,
"\\end{minipage}
\\end{table}"
@ -887,22 +887,20 @@ cat(c("\\begin{table}[!htb]
```
Tables 9 and 10 show the accuracy intervals for the test set and the full set, respectively.
```{r random-forest-overall_results, echo=FALSE, results='asis'}
# Print both tables on one line
cat(c("\\begin{table}[!htb]
\\begin{minipage}{.5\\linewidth}
\\caption{Test set accuracy}
\\centering",
cm2_overall,
"\\end{minipage}%
\\begin{minipage}{.5\\linewidth}
\\centering
\\caption{Full set accuracy}",
cm3_overall,
"\\end{minipage}
\\end{table}"
@ -910,20 +908,20 @@ cat(c("\\begin{table}[!htb]
```
Tables 11 and 12 show the overall results for each set.
```{r random-forest-results_by_class, echo=FALSE, results='asis'}
# Print both tables on one line
cat(c("\\begin{table}[!htb]
\\begin{minipage}{.5\\linewidth}
\\caption{Test set results}
\\centering",
cm2_byClass,
"\\end{minipage}%
\\begin{minipage}{.5\\linewidth}
\\centering
\\caption{Full set results}",
cm3_byClass,
"\\end{minipage}
\\end{table}"
))
```
As can be seen from these results, Random Forest is a much quicker way of removing most of the *white* addresses, while providing a comparable level of accuracy and precision. This method will be used in the final composite model to save time.
### Method Part 2: Categorical SOMs
Now we train a new model after removing all *white* addresses. The predictions from the Random Forest model are used to isolate the *black* addresses, which are then classified further into ransomware families using SOMs. The reduced set is categorized using a supervised SOM method with the 28 ransomware families as the target classification groups.
```{r soms-families, warning=FALSE}
# Start timer
tic("Categorical SOMs", quiet = FALSE, func.tic = my.msg.tic)
# Now use this prediction to reduce the original set to only "black" addresses
# First append the full set of predictions to the original set.
cm_labels <- confusionMatrix(ransomware_group.prediction$prediction[[2]],
test_set$label)
# End timer
toc(quiet = FALSE, func.toc = my.msg.toc, info = "Run Time")
```
When selecting the grid size for a Self Organizing Map, there are at least two different schools of thought. The two approaches tried here are explained (with supporting documentation) in a ResearchGate forum thread.$^{[8]}$ The first method is based on the size of the training set, and in this case results in a larger, more accurate map. The second method is based on the number of known categories to classify the data into, and in this case results in a smaller, less accurate map. For this script, a grid size of `r grid_size` has been selected.
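The two heuristics can be sketched as follows. The 5 * sqrt(N) rule of thumb is commonly cited in the SOM literature; the sample sizes plugged in below are illustrative, not the actual counts from this data set:

```{r som-grid-size-sketch}
# Heuristic 1: size the grid from the number of training observations.
# A common rule of thumb is roughly 5 * sqrt(N) total map nodes,
# laid out on a square-ish grid.
grid_from_n <- function(n_obs) {
  nodes <- 5 * sqrt(n_obs)
  ceiling(sqrt(nodes))  # side length of a square grid with ~nodes cells
}

# Heuristic 2: size the grid from the number of target categories,
# giving each known class roughly one node.
grid_from_k <- function(n_categories) {
  ceiling(sqrt(n_categories))
}

grid_from_n(16000)  # hypothetical training-set size: a much larger map
grid_from_k(28)     # the 28 ransomware families: a much smaller map
```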
A summary of the results for the categorization of black addresses into ransomware families follows. For the full table of predictions and statistics, see the Appendix.
Table 13 shows the overall results of the final categorization.
```{r cm_overall, echo=FALSE}
# Overall section of the confusion matrix formatted through kable()
cm_labels$overall %>%
knitr::kable(caption="Overall categorization results",
col.names = c("score") )
```
Table 14 shows the final results by class. It appears that many of the families with lower membership were not predicted at all. In fact, all the addresses classified as *black* by the Random Forest method have been grouped into only 7 families, a quarter of the actual 28. The relatively high accuracy rate would suggest that the larger families were predicted correctly, and that the smaller families were lumped in with the most similar of the larger families. This could be an area for further refinement of the second SOM algorithm.
```{r soms-output-by-class, echo=FALSE, size="tiny"}
# By Class section of the confusion matrix formatted through kable()
cm_labels$byClass %>%
knitr::kable(caption="Categorization results by class")
```
\newpage
### Clustering Visualizations
Toroidal neural node maps are used to generate the models, and can be visualized in a number of ways. The toroidal nature means that the top and bottom edges of the map wrap around to meet each other, as do the left and right edges, forming a toroid, or donut shape.
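A minimal sketch of what this wrap-around means for distances on such a grid (illustrative only; the kohonen package handles toroidal distances internally):

```{r toroidal-distance-sketch}
# Sketch: wrap-around ("toroidal") distance between two grid cells.
# Along each axis the distance is the shorter of going directly
# or wrapping around the matched edges.
toroidal_dist <- function(a, b, dims) {
  d <- abs(a - b)         # per-axis direct distance
  d <- pmin(d, dims - d)  # wrap around whichever edge is closer
  sqrt(sum(d^2))
}

# On a 10 x 10 toroidal grid, opposite corners are diagonal neighbours:
toroidal_dist(c(1, 1), c(10, 10), dims = c(10, 10))  # sqrt(2)
```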
The Training progress plot shows how many iterations the model had to undergo before the distances on the map stabilized. The Mapping plot is a visual representation of the individual observations and where they lie in the two-dimensional grid generated by the model. The Quality plot shows the average distance between addresses in each cell. The Counts plot gives a measure of the number of observations in each cell of the grid.
```{r categorical som graphs, echo=FALSE, fig.show="hold", out.width='50%'}
# SOM visualization plots
# Visualize training progress
plot(som_model2, type = 'changes', pch = 19, palette.name = topo.colors)
# Visualize neural network mapping
plot(som_model2, type = 'mapping', pch = 19, palette.name = topo.colors)
# Visualize quality
plot(som_model2, type = 'quality', pch = 19, palette.name = topo.colors)
# Visualize counts
plot(som_model2, type = 'counts', pch = 19, palette.name = topo.colors)
```
We can also look at heatmaps for each of the non-temporal features. This is where the grouping and the toroidal nature of the maps start to become apparent. The color represents the average value of that feature in each cell.
```{r heatmaps, echo=FALSE, fig.show="hold", out.width='50%'}
# Visualize heatmap for variable 1
plot(som_model2, type = 'property', property = som_model2$codes[[1]][,1],
main=colnames(train_num)[1], pch = 19, palette.name = topo.colors)
# Visualize heatmap for variable 6
plot(som_model2, type = 'property', property = som_model2$codes[[1]][,6],
     main=colnames(train_num)[6], pch = 19, palette.name = topo.colors)
```
The code plots show how much of each feature is represented by each cell in the map. For large numbers of categories (such as the ransomware families here), the default behavior is to make a line plot instead of a segment plot, which leads to the density-like patterns on the right. The left plot shows the codebook vectors of the features used in the model; these can be interpreted directly as an indication of how likely a given class is at a certain unit. The standard code plot renders the codebook vector of each grid cell as a pie-like segment diagram, where the radius of a wedge represents the magnitude in that dimension. From these, visual patterns start to emerge, as similar addresses are grouped into cells with similar segment diagrams.
```{r fan diagrams graphs, echo=FALSE}
# Visualize codebook vectors
plot(som_model2, type = 'codes', pch = 19, palette.name = topo.colors)
```
Clustering offers a nice way of visualizing the final SOM grid and the categorical boundaries that were formed by the model. Ideally, it is a visual representation of the final grouping. There are multiple algorithms for doing this.
K-means clustering is said to be better for smaller maps, while hierarchical clustering is supposed to be better for larger maps. In this case, hierarchical clustering did not converge on the right number of groups, whereas K-means requires that the number of groups be specified ahead of time. Since we already know how many ransomware families are represented in the data set, K-means clustering is used to visualize the final categorization of the data on the map.
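The practical difference can be sketched on stand-in data. The matrix below is a hypothetical substitute for the SOM codebook vectors (`som_model2$codes[[1]]` in the script), not the actual model output:

```{r clustering-comparison-sketch}
set.seed(42)
# Stand-in for the SOM codebook matrix: 400 map nodes, 6 features.
codes <- matrix(rnorm(400 * 6), nrow = 400, ncol = 6)
k <- 28  # number of ransomware families in the data set

# K-means: the number of groups is a direct input.
km <- kmeans(codes, centers = k, iter.max = 100, nstart = 5)

# Hierarchical clustering: cutting the tree at a height gives whatever
# number of groups the dendrogram happens to have there...
tree <- hclust(dist(codes))
hc_h <- cutree(tree, h = 3)

# ...while cutting at k forces exactly k groups, but the merges are
# fixed by the tree rather than optimized for that k.
hc_k <- cutree(tree, k = k)
```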
```{r clustering-setup, echo=FALSE, include=FALSE}
#############################################################################
# Set the number of clusters to the number of ransomware families
# (the "white" label is excluded)
n_groups <- length(unique(ransomware$label)) - 1
# Generate k-means clustering
som.cluster <- kmeans(data.frame(som_model2$codes[[1]]), centers=n_groups)
```
K-means clustering categorizes the SOM grid by adding boundaries to the classification groups. This is the author's favorite graph in the entire report.
```{r clustering-plots, echo=FALSE, fig.align="center"}
# Plot K-means clustering results
plot(som_model2,
     type = 'mapping', pch = 19, palette.name = topo.colors)

# Add the k-means cluster boundaries to the map
add.cluster.boundaries(som_model2, som.cluster$cluster)
```
### Comparison to Results from Original Paper
In the original paper by Akcora et al., they tested several different sets of parameters on their TDA model. According to them, "In the best TDA models for each ransomware family, we predict **16.59 false positives for each
true positive.** In turn, this number is 27.44 for the best non-TDA models."$^{[3]}$ In fact, the **highest** precision [a.k.a. Positive Predictive Value, defined as TP/(TP+FP), where TP = the number of true positives, and FP = the number of false positives] they achieved was only 0.1610. By comparison, although several of our predicted classes had zero or NA precision values due to low family membership in some cases, the **lowest** non-zero precision value is `r toString(min(cm_labels$byClass[,5][which(cm_labels$byClass[,5] > 0)]))`, with many well above that, equaling one in a few cases.
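As a quick arithmetic check on how these two figures relate (a worked example, not part of the analysis code):

```{r precision-arithmetic-sketch}
# Precision (Positive Predictive Value) = TP / (TP + FP).
precision <- function(tp, fp) tp / (tp + fp)

# 16.59 false positives per true positive corresponds to a precision
# of 1 / (1 + 16.59), i.e. roughly 0.057.
p_tda <- precision(1, 16.59)

# Conversely, a precision of 0.1610 implies about (1 - p) / p, or
# roughly 5.2, false positives per true positive.
fp_per_tp <- (1 - 0.1610) / 0.1610
```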
One might say that we are comparing apples to oranges by benchmarking a single-method model against a two-method stack. Still, the two-model approach seems justified and superior in this case, especially when measured in terms of total run time, and it has the added benefit of largely avoiding false positives.
### Future Work
I only scratched the surface of the SOM algorithm, which seems to have many implementations and parameters that could be investigated further and possibly optimized via cross-validation. For example, the grid size used to train the SOM was calculated using an algorithm based on the size of the training set, and while this performed better than a grid size based on the number of categories, it may not be ideal. Optimization around grid size could still be carried out. Hexagonal grids with toroidal topology were the only type used. Other configurations, such as square grids and non-toroidal topologies, are also possible and may be worth investigating.
A dual Random Forest approach, used first to isolate the ransomware addresses and then to classify them, might be quick enough to run in under ten minutes on all the hardware listed. Conversely, a dual SOM method could be created for maximum precision if the necessary computing resources were available.
The script itself has a few areas that could be optimized further. The sampling method does what it needs to do, but the ratios taken for each set could possibly be tuned. The second SOM algorithm could also be optimized to correctly predict more of the low-membership families.
Hierarchical clustering was attempted in addition to K-means clustering. The correct number of families was difficult to achieve, whereas it is a direct input of the K-means method. Another look at the clustering techniques might yield different results. Other clustering techniques exist, such as "Hierarchical K-Means"$^{[13]}$, which could be explored for even more clustering visualizations.
### Conclusion
This report presents a reliable method for classifying Bitcoin addresses into known ransomware families, while at the same time avoiding false positives by filtering them out using a binary method before classifying them further. It leaves the author wondering how much harder it would be to perform the same task for ransomware that uses privacy-oriented coins. Certain cryptocurrency networks, such as Monero, utilize privacy features that obfuscate transactions from being analyzed in the same way that the Bitcoin network has been analyzed here. Some progress has been made towards analyzing these networks$^{[9]}$. At the same time, the developers of such networks continually evolve the code to complicate transaction tracking. This could be another good area for future research.
## References
Software_, *87*(7), 1-18. doi: 10.18637/jss.v087.i07 (URL: https://doi.org/10.18637/jss.v087.i07).
Statistical Software_, *21*(5), 1-19. doi: 10.18637/jss.v021.i05 (URL:
https://doi.org/10.18637/jss.v021.i05).
[12] Difference between K means and Hierarchical Clustering (Jul 07, 2021) https://www.geeksforgeeks.org/difference-between-k-means-and-hierarchical-clustering/
[13] Hierarchical K-Means Clustering: Optimize Clusters (Oct 15 2021) https://www.datanovia.com/en/lessons/hierarchical-k-means-clustering-optimize-clusters/
\newpage
## Appendix:
### Categorical SOM prediction table and confusion matrix
Here are the full prediction results for the categorization of *black* addresses into ransomware families. It is assumed that all *white* addresses have already been removed.
```{r soms-output-table, echo=FALSE}
# Final results: categorization of "black" addresses into ransomware families.
cm_labels
```
