Thursday, March 5, 2020

Week Six



In the sixth week, the poster's layout was created and then filled with the required data in each section. The poster was finished with the help of our supervisor, Dr. Xin Tu.


After finishing the poster and printing it out, its sections were divided between the group members; each member practiced his section and presented it in front of the others.



Monday, February 24, 2020

Week Five



In week 5, the user interface was designed, programmed, and linked with the original neural network. The program was also completed in PyCharm.

The package used for this user interface is PyQt, a powerful package for UI construction in Python.

The interface is shown below:



Here are the steps for using this UI (a minimal sketch of the workflow follows the list):
  • Select the input file
  • Press the Simulate button
  • The program processes the input data with the ANN
  • The prediction is saved into an Excel file
  • The program opens the file automatically and shows the prediction result
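
As an illustration, here is a minimal PyQt5 sketch of this workflow. The widget layout, the file names, and predict_with_ann are placeholders, not the project's actual code:

import sys, os
import pandas as pd
from PyQt5.QtWidgets import (QApplication, QWidget, QPushButton,
                             QVBoxLayout, QFileDialog)

def predict_with_ann(df):
    # Placeholder for the trained neural network; the real program
    # would load the saved model and compute the predictions here.
    return df

class SimulatorUI(QWidget):
    def __init__(self):
        super().__init__()
        self.input_path = None
        layout = QVBoxLayout(self)
        select_btn = QPushButton("Select input file")
        simulate_btn = QPushButton("Simulate")
        select_btn.clicked.connect(self.select_file)
        simulate_btn.clicked.connect(self.simulate)
        layout.addWidget(select_btn)
        layout.addWidget(simulate_btn)

    def select_file(self):
        # Step 1: choose the file holding the input parameters
        self.input_path, _ = QFileDialog.getOpenFileName(
            self, "Select input file", "", "Excel files (*.xlsx *.xls)")

    def simulate(self):
        if not self.input_path:
            return
        data = pd.read_excel(self.input_path)
        result = predict_with_ann(data)          # run the ANN on the inputs
        out_path = "prediction.xlsx"
        result.to_excel(out_path, index=False)   # save prediction to Excel
        os.startfile(out_path)                   # open the file (Windows only)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    ui = SimulatorUI()
    ui.show()
    sys.exit(app.exec_())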



Monday, February 17, 2020

Week Four



In week 4, most of the programming and plotting was done:

1) Comparison of predicted vs. experimental data (pre_exp):
The predicted data generated by the trained neural network programme are compared with the experimental data. In each graph, the predicted data are on the y-axis; a sketch of these parity plots follows the two plotting calls below.


scatter(exp_conversion, pre_conversion)



scatter(exp_efficiency, pre_efficiency)
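
For illustration, a minimal matplotlib sketch of such a parity plot, with placeholder arrays standing in for the project's data:

import numpy as np
import matplotlib.pyplot as plt

# Placeholder values; in the project these come from the experiments
# and from the trained network's predictions.
exp_conversion = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
pre_conversion = np.array([11.0, 19.0, 32.0, 38.0, 52.0])

plt.scatter(exp_conversion, pre_conversion)
lims = [exp_conversion.min(), exp_conversion.max()]
plt.plot(lims, lims, "k--")           # y = x line: points on it are perfect
plt.xlabel("Experimental conversion")
plt.ylabel("Predicted conversion")
plt.show()

The efficiency plot is built the same way with exp_efficiency and pre_efficiency.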

2) Effect of parameters:
The experimental data and simulated data are combined to investigate the effect of the 4 parameters on the conversion rate and the energy efficiency respectively.




3) Importance of parameters:
The 4 parameters are constrained to the same range, and the change in conversion and energy efficiency caused by incrementing each parameter within that range is shown on the graph. From the trend of the lines on the graph, the relative importance of the parameters can be observed; a sketch of this one-at-a-time sweep is given below.
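
A minimal sketch of the sweep, assuming a placeholder model and hypothetical parameter names (the real names and ranges come from the project data):

import numpy as np
import matplotlib.pyplot as plt

def model_predict(X):
    # Placeholder for the trained ANN's prediction function.
    return X.sum(axis=1)

# Hypothetical parameter names, used only for this illustration.
param_names = ["power", "flow rate", "concentration", "temperature"]
baseline = np.array([1.0, 1.0, 1.0, 1.0])   # all parameters at mid-range
sweep = np.linspace(0.0, 2.0, 50)           # the common constrained range

for i, name in enumerate(param_names):
    grid = np.tile(baseline, (len(sweep), 1))
    grid[:, i] = sweep                      # increment one parameter at a time
    plt.plot(sweep, model_predict(grid), label=name)

plt.xlabel("Parameter value (common range)")
plt.ylabel("Predicted conversion")
plt.legend()
plt.show()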


4) Neuron number:
The neural network for this programme has 2 layers. In order to find the best neuron number for each layer, the effect of different neuron-number combinations is compared on a 3-D graph.
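
One possible way to build such a comparison, sketched here with synthetic placeholder data and scikit-learn's MLPRegressor standing in for the project's own network:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor

# Synthetic placeholder data standing in for the project's inputs and targets.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])

sizes = [2, 4, 6, 8, 10]                  # neuron counts to try in each layer
score = np.zeros((len(sizes), len(sizes)))
for i, n1 in enumerate(sizes):
    for j, n2 in enumerate(sizes):
        net = MLPRegressor(hidden_layer_sizes=(n1, n2),
                           max_iter=3000, random_state=0)
        score[i, j] = net.fit(X, y).score(X, y)   # R^2 for this combination

N1, N2 = np.meshgrid(sizes, sizes, indexing="ij")
ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(N1, N2, score)
ax.set_xlabel("Neurons in layer 1")
ax.set_ylabel("Neurons in layer 2")
ax.set_zlabel("R^2 score")
plt.show()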




Monday, February 10, 2020

Week Three



The ideal conditions for this experiment are illustrated in the following scheme of the ideal experimental system, where each step has certain requirements to be met so that the output is as expected.


Figure 2 (Artificial Neural Network)


Naphthalene and nitrogen gases are mixed at a high temperature of 200 °C. Then, the combined gases are injected into the GAD reactor. The flow of nitrogen is kept at 4 L/min while the concentration of naphthalene is 1.1-2 mg/L. Naphthalene reduction happens in the gliding arc discharge (GAD) reactor. The distance between the two metal electrodes in the GAD and the diameter of the gas nozzle are 2 mm and 1.5 mm respectively.

Data required to create the ANN is as follows:

Carbon balance (%):

C2H2 yield:

Discharge Power:

We also updated the algorithms to achieve higher simulation accuracy:

Linear regression is the fundamental method that we use in this neural network project. It is a linear approach to modelling the relationship between the inputs and the outputs: inputs are multiplied by coefficients to try to get close to the output value.

In a neural network, the coefficients are called weights. The inputs are multiplied by the weights, a bias is added, and the result is sent into the activation function in each neuron to obtain the output of one layer. The process is repeated in the following layers; this is called forward propagation. The final result goes through the loss function, which simply describes the error between the predicted results and the actual data. That is what the network does in one epoch.
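
A minimal NumPy sketch of one such forward pass through a 2-layer network; all sizes and values are illustrative:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 1.2, 3.0, 0.8])            # 4 illustrative inputs
W1, b1 = 0.1 * np.ones((3, 4)), np.zeros(3)   # layer 1 weights and bias
W2, b2 = 0.1 * np.ones((1, 3)), np.zeros(1)   # layer 2 weights and bias

h = sigmoid(W1 @ x + b1)       # weights * inputs + bias -> activation function
y_pred = W2 @ h + b2           # repeat the same pattern in the next layer

y_true = 1.0                   # illustrative actual value
loss = (y_pred - y_true) ** 2  # loss function: error between prediction and data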

Then the network starts from the loss function and works backwards to decide the best-fitting weights and biases for the project. This is called backpropagation because it goes the opposite way from the first step.

That is the basic process of a NN, but there is a serious problem called overfitting. It is a phenomenon in which the model corresponds too closely to the training data and is unable to fit additional data reliably.

So first, we introduce regularization to alleviate this problem. Regularization, in brief, makes the network select smaller weights and avoid choosing large weights. Large weights cause overfitting because they make the network focus too much on one characteristic.
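
As an illustration, L2 regularization (one common form, assumed here) adds the squared weights to the loss so that large weights are penalised:

import numpy as np

def l2_loss(y_pred, y_true, weights, lam=0.01):
    mse = np.mean((y_pred - y_true) ** 2)                 # data term
    penalty = lam * sum(np.sum(W ** 2) for W in weights)  # weight penalty
    return mse + penalty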

Second, we use the dropout method, which randomly deactivates a fraction of the neurons in each training pass so that the network does not rely too heavily on any particular neurons.
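
A sketch of how dropout is typically applied during training (inverted dropout is assumed here):

import numpy as np

def dropout(activations, keep_prob=0.8):
    rng = np.random.default_rng()
    mask = rng.random(activations.shape) < keep_prob   # keep ~80% of neurons
    return activations * mask / keep_prob              # rescale the survivors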

There is also another problem. The input data may have different scales: for example, the speed may be measured in hundreds of metres per second, but the calculated efficiency will not be larger than 1. In this case, the speed will have a much larger effect on the output than the efficiency during training, which may not correspond to the real situation. Thus, we use normalization, which converts each input into values between 0 and 1 so that this effect is eliminated.
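
For example, min-max normalization rescales each input column to the [0, 1] range (a sketch, not the project's exact code):

import numpy as np

def min_max_normalize(X):
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)   # each column rescaled to [0, 1]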

Monday, February 3, 2020

Week Two



Research about neural networks was done, and we were able to understand the basic structure of a NN:



Fundamental Algorithms of NN:
- Linear Regression Analysis:
A type of predictive analysis. Its idea is to examine 2 things. Firstly, does the set of variables do a good job of predicting the dependent variable? Secondly, which particular variables significantly predict the outcome, and in what way, as indicated by the magnitude of their coefficients?
The simplest form of the regression equation:
y = c + b*x
where
y: estimated dependent variable score
c: constant
b: regression coefficient
x: score on the independent variable 
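
As a small illustration, a least-squares fit of this equation to a few made-up points:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # made-up independent variable scores
y = np.array([2.1, 3.9, 6.2, 7.8])   # made-up dependent variable scores

b, c = np.polyfit(x, y, 1)           # least-squares fit; returns [b, c]
y_est = c + b * x                    # estimated dependent variable scores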

- Forward propagation:
One of the core processes during the learning phase, in which input information (data) moves in a forward direction through the network, passing through the hidden layers, where it is processed and then passed to the successive layer.

- Loss function:
An objective function is a function used to evaluate a candidate solution; when we want to minimise it, we call it a loss function.

- Activation function:
A function used to compute the output of a node (also called a transfer function).

- Backpropagation:
Gradient descent:
This method of backpropagation finds a minimum of a function by starting at a random location in parameter space and then reducing the error step by step until it reaches a local minimum.
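
A minimal sketch of gradient descent on a toy one-parameter loss, showing the idea of stepping downhill from a random start:

import numpy as np

def loss(w):
    return (w - 3.0) ** 2            # toy loss with its minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)           # derivative of the toy loss

rng = np.random.default_rng()
w = rng.uniform(-10.0, 10.0)         # random starting location
lr = 0.1                             # learning rate (step size)
for _ in range(100):
    w -= lr * grad(w)                # step downhill to reduce the error
print(w, loss(w))                    # w approaches 3, loss approaches 0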





Monday, January 27, 2020

Week One



In the first week, tasks were divided between the individuals within the group, and intensive research was done by everyone regarding their own tasks. General research was done first so that all members would have an idea about the topic.

Through the research, more was learned about neural networks; for example, that neurons are a neural network's basic unit of information processing.

These are the 3 basic elements of the neuron model:
- An adder, which sums the input signals weighted by the neuron's corresponding synaptic weights.
- An activation function (squashing function), which limits the amplitude of the neuron's output. The normal amplitude range of a neuron's output is written as [0, 1] or as another interval, [-1, +1].
- Links (synapses), each distinguished by a weight or intensity. The subscript of the synaptic weight is important: the input signal is multiplied by the synaptic weight. Unlike synapses in the human brain, the synaptic weight of an artificial neuron may lie in a range that includes negative as well as positive values.
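
A sketch combining these three elements in a single artificial neuron (values are illustrative):

import numpy as np

def neuron(x, w, b):
    v = np.dot(w, x) + b             # adder: weighted sum of inputs plus bias
    return 1.0 / (1.0 + np.exp(-v))  # activation: limits output to [0, 1]

x = np.array([0.2, 0.7, 1.0])        # input signals
w = np.array([0.5, -0.3, 0.8])       # synaptic weights (positive or negative)
print(neuron(x, w, b=0.1))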



Figure 1 (Artificial Neural Network)