Monday, February 10, 2020

Week Three



The ideal conditions for this experiment are illustrated in the following scheme of the experimental system, where each step has certain requirements that must be met so that the output is as expected.


Figure 2: Artificial Neural Network


Naphthalene and nitrogen gases are mixed at a high temperature of 200 °C, and the combined stream is then injected into the Gliding Arc Discharge (GAD) reactor, where the naphthalene reduction takes place. The nitrogen flow is kept at 4 L/min, while the naphthalene concentration is 1.1–2 mg/L. The distance between the two metal electrodes in the GAD and the diameter of the gas nozzle are 2 mm and 1.5 mm, respectively.

Data required to create the ANN are as follows (a sketch of how these quantities might be arranged appears after the list):

Carbon balance (%):

C2H2 yield:

Discharge Power:
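The measured values themselves are omitted here. Purely to illustrate how such data might be arranged before training (every number below is a placeholder, not a measurement from our experiment), the inputs and targets could be stored as NumPy arrays:

```python
import numpy as np

# Hypothetical layout only -- none of these numbers are real measurements.
# Inputs: discharge power (W) and naphthalene concentration (mg/L).
X = np.array([[30.0, 1.1],
              [40.0, 1.5],
              [50.0, 2.0]])

# Targets: carbon balance (%) and C2H2 yield (%).
y = np.array([[92.0, 5.0],
              [95.0, 8.0],
              [97.0, 11.0]])
```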

We also updated the algorithms to achieve higher simulation accuracy:

Linear regression is the fundamental method we use in this neural network project. It is a linear approach to modelling the relationship between the inputs and the outputs: each input is multiplied by a coefficient, and the weighted sum is fitted to get as close as possible to the output value.
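Here is a minimal sketch of this idea in NumPy. The numbers are made up for illustration, not taken from our experiment; the coefficient and bias that best fit the data are found by least squares:

```python
import numpy as np

# Made-up data roughly following y = 2x, for illustration only.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 4.0, 6.2, 7.9])

# Append a column of ones so the bias is learned as one extra coefficient.
X1 = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
w, b = coef[:-1], coef[-1]

y_pred = X @ w + b          # inputs times coefficients, plus bias
print(w, b)                 # roughly [2.0] and a bias near 0
```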

In a neural network, the coefficients are called weights. The inputs are multiplied by the weights, a bias is added, and the result is passed through the activation function of each neuron to obtain the output of one layer. The process is repeated in the following layers; this is called forward propagation. The final result then goes through the loss function, which simply describes the error between the predicted results and the actual data. That is the forward half of what the network does in each epoch.
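A minimal sketch of one forward pass, assuming a small network with one hidden layer, sigmoid activations, and a squared-error loss (these choices are illustrative, not our final architecture):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.random(3)                          # one sample with 3 input features
W1, b1 = rng.random((4, 3)), np.zeros(4)   # hidden layer: 4 neurons
W2, b2 = rng.random((1, 4)), np.zeros(1)   # output layer: 1 neuron

h = sigmoid(W1 @ x + b1)                   # inputs * weights + bias -> activation
y_pred = sigmoid(W2 @ h + b2)              # repeated for the next layer

y_true = 0.7                               # made-up target value
loss = 0.5 * (y_pred - y_true) ** 2        # squared-error loss
print(y_pred, loss)
```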

Then the network starts from the loss function and works backwards to decide how to adjust the weights and biases to fit the data better. Concretely, the gradient of the loss with respect to each weight is computed with the chain rule, and every weight is nudged in the opposite direction of its gradient (gradient descent). This is called back propagation because it traverses the network in the opposite direction from the first step.
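Continuing the sketch above, one hand-coded training loop with backpropagation and gradient descent might look like this (again, the architecture and learning rate are assumptions made only for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x, y_true, lr = rng.random(3), 0.7, 0.5    # sample, target, learning rate
W1, b1 = rng.random((4, 3)), np.zeros(4)
W2, b2 = rng.random((1, 4)), np.zeros(1)

for epoch in range(100):
    # Forward propagation, as in the previous sketch.
    h = sigmoid(W1 @ x + b1)
    y_pred = sigmoid(W2 @ h + b2)
    loss = 0.5 * (y_pred - y_true) ** 2

    # Backward propagation: chain rule, from the loss back towards the input.
    d_out = (y_pred - y_true) * y_pred * (1 - y_pred)
    dW2, db2 = np.outer(d_out, h), d_out
    d_h = (W2.T @ d_out) * h * (1 - h)
    dW1, db1 = np.outer(d_h, x), d_h

    # Move every weight and bias against its gradient.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(loss)    # the error shrinks as training progresses
```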

That is the basic process of a neural network, but there is a serious problem called overfitting: a phenomenon where the model corresponds too closely to the training data and is unable to fit additional data reliably.

So first, we introduce regularization to alleviate this problem. Regularization, in brief, pushes the network towards smaller weights and penalizes large ones. Large weights encourage overfitting because they make the network focus too much on a single characteristic.
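For example, L2 regularization (assumed here just for illustration; other variants exist) adds a penalty proportional to the squared weights to the loss, so the optimizer is steered towards smaller weights:

```python
import numpy as np

lam = 0.01    # regularization strength -- an assumed, tunable value

def l2_regularized_loss(y_pred, y_true, weight_matrices):
    data_loss = 0.5 * np.sum((y_pred - y_true) ** 2)
    penalty = 0.5 * lam * sum(np.sum(W ** 2) for W in weight_matrices)
    return data_loss + penalty

W_example = [np.array([[3.0, -2.0]])]     # hypothetical weight matrix
print(l2_regularized_loss(np.array([0.9]), np.array([0.7]), W_example))

# During gradient descent this penalty simply adds lam * W to each weight
# gradient ("weight decay"):  W -= lr * (dW + lam * W)
```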

Second, we use the dropout method, which randomly switches off a fraction of the neurons in each training pass so that the network cannot rely too heavily on any single unit.
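A minimal sketch of (inverted) dropout applied to a layer's activations; the keep probability of 0.8 is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, keep_prob=0.8, training=True):
    """Randomly zero ~20% of the neurons; rescale the rest (inverted dropout)."""
    if not training:
        return h                              # at test time, use every neuron
    mask = rng.random(h.shape) < keep_prob    # which neurons survive this pass
    return h * mask / keep_prob               # rescaling keeps the expected value

h = rng.random(4)                             # hidden-layer activations
print(dropout(h))                             # some entries are zeroed out
```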

There is also another problem. If the input features have different scales (for example, speed may be measured in hundreds of metres per second, while the calculated efficiency is never larger than 1), then during training the speed will have a much larger effect on the output than the efficiency, which may not correspond to the real situation. Thus, we use normalization, which rescales each input into values between 0 and 1 so that this scale effect is eliminated.
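A minimal sketch of min-max normalization, assuming that is the rescaling scheme meant (the speed and efficiency numbers are made up):

```python
import numpy as np

# Made-up inputs: column 0 is speed (m/s), column 1 is efficiency (0..1).
X = np.array([[300.0, 0.45],
              [150.0, 0.90],
              [450.0, 0.60]])

X_min, X_max = X.min(axis=0), X.max(axis=0)
X_norm = (X - X_min) / (X_max - X_min)    # every column now lies in [0, 1]
print(X_norm)

# The SAME X_min and X_max must be reused when normalizing new inputs.
```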
