
Using Neural Networks to Identify the Real Position of Acoustic Sources in Beamforming Maps

João Pedro Rebouças Maia, Guilherme Lellis Cuellar, Filipe Ramos do Amaral¹, Taylor B. Spalt², Carlos do Carmo Pagani Júnior³, Experimental Campus of São João da Boa Vista, Aeronautical Engineering, jp.maia@unesp.br.

¹ e-mail: framaral@gmail.com, Instituto Tecnologico de Engenharia – ITA

² e-mail: taylor.b.spalt@gmail.com

³ e-mail: pagani@unesp.br, Universidade Estadual Paulista – Campus de São João da Boa Vista

I. INTRODUCTION

The need to build quieter aircraft is pressing in both civil and military aviation. To make this possible, methods are needed that identify the noise sources on the aircraft structure, allowing designers to intervene in the noise-generating element and, ultimately, develop a quieter structure. The Beamforming method, which relies on microphone arrays distributed in a wind tunnel, measures the sound pressure at different locations and, from these measurements, builds an acoustic image in which the likely regions of acoustic sources can be visualized. Beamforming, however, has limitations that make distinguishing a real noise source from a spurious one a hard task. Developing an artificial intelligence able to quickly interpret a Beamforming map, indicating the positions and characteristics of all real noise sources, is therefore highly relevant for the acoustic improvement of aeronautical structures.
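
For context, the minimal sketch below (Python/NumPy) shows how a conventional frequency-domain beamforming map is typically formed: the cross-spectral matrix of the microphone signals is scanned with monopole steering vectors over a grid of candidate source positions, producing one map value per grid point. The function name, the monopole propagation model and the normalization are illustrative assumptions, not the specific processing used in this project.

import numpy as np

def beamforming_map(csm, mic_pos, grid_pos, freq, c=343.0):
    """Conventional frequency-domain beamforming (illustrative sketch).

    csm      : (M, M) cross-spectral matrix of the microphone signals
    mic_pos  : (M, 3) microphone coordinates [m]
    grid_pos : (G, 3) candidate source positions [m]
    freq     : analysis frequency [Hz],  c : speed of sound [m/s]
    """
    k = 2.0 * np.pi * freq / c                                             # acoustic wavenumber
    r = np.linalg.norm(grid_pos[:, None, :] - mic_pos[None, :, :], axis=2) # (G, M) distances
    g = np.exp(-1j * k * r) / r                                            # monopole steering vectors
    num = np.einsum('gm,mn,gn->g', g.conj(), csm, g).real                  # g^H C g at each grid point
    den = np.einsum('gm,gm->g', g.conj(), g).real ** 2                     # (g^H g)^2 normalization
    return num / den                                                       # beamforming map values

Peaks of the resulting map indicate candidate source regions; as noted above, spurious peaks are what make the interpretation difficult and motivate the neural-network post-processing studied here.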

Neural networks have become a common way of using machine learning to solve complex problems such as multiclass classification. Feedforward models trained with backpropagation and cost functions have been widely used over the last two decades, but the iterative tuning of the network parameters can cause many problems if not done correctly. The Extreme Learning Machine (ELM) neural network model offers several advantages over Support Vector Machine (SVM) and Least Squares SVM (LS-SVM) models because no iterative tuning is required; consequently, no prior information is needed to determine the optimum weight and bias values of the single-hidden-layer neurons.
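
To make the "no iterative tuning" property concrete, here is a minimal sketch (Python/NumPy) of a basic single-hidden-layer ELM: the hidden weights and biases are drawn at random and only the output weights are computed, in closed form, with a Moore-Penrose pseudo-inverse. This is the standard batch ELM formulation (the incremental variant actually described in Section IV is sketched there); shapes and names are illustrative.

import numpy as np

def train_elm(X, T, n_hidden, seed=0):
    """Basic (batch) ELM: random hidden layer, closed-form output weights.

    X : (N, d) training inputs,  T : (N, m) training targets.
    Predictions are sigmoid(X @ W + b) @ beta.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never tuned)
    b = rng.standard_normal(n_hidden)                 # random biases (never tuned)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                      # output weights via Moore-Penrose pseudo-inverse
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta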

 

        

II. OBJECTIVES

Recognize Beamforming techniques and apply them to train artificial intelligence systems to identify sound sources and obtain their acoustic spectra in acoustic fields produced by free turbulent flows.

III. THE HIDDEN NEURON STRUCTURE

The neuron used in the hidden layer can be represented by the following equation:

Single neuron output: h(\mathbf{a}) = g(\mathbf{w} \cdot \mathbf{a} + b)

Here, a is the activation vector, which is related to the CSD at the observer, w is a randomly generated weight vector applied to the activation vector, and b is a bias. The activation g is a sigmoid: it keeps the neuron output defined over the whole real domain while bounding its image to a finite interval (between 0 and 1 for the logistic sigmoid, or between -1 and 1 for a bipolar sigmoid such as tanh).

In this form, the weighted sum of all hidden-layer outputs, which will later be used to determine the output of the network, can be more easily analyzed and tuned.

Figure 1. Sigmoid activation function.
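
As an illustration, a minimal sketch of one such hidden neuron (Python/NumPy), assuming the logistic sigmoid; the names a, w and b mirror the symbols above and are otherwise illustrative:

import numpy as np

def hidden_neuron_output(a, w, b):
    """Output of one sigmoid hidden neuron: g(w . a + b)."""
    z = np.dot(w, a) + b                 # weighted sum of the activation vector plus bias
    return 1.0 / (1.0 + np.exp(-z))      # logistic sigmoid, bounded output

# example: a 3-component activation vector with random weight vector and bias
rng = np.random.default_rng(1)
a = np.array([0.2, -0.5, 1.3])
w = rng.standard_normal(3)
b = rng.standard_normal()
print(hidden_neuron_output(a, w, b))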

IV. NETWORK STRUCTURE

 

The Incremental Extreme Learning Machine (I-ELM) is a feedforward neural network with a single hidden layer in which the hidden nodes have random parameters. The layer size is incremented by successive additions of new hidden nodes, according to how close the results on the output layer are to the expected results. Each new hidden node L also has random parameters (w_L, b_L). What ensures learning, i.e. that the new results are closer to the expected ones, is that the output weight beta_L of this node is computed from the residual error E and from the vector h_L that stores the node's activation values for all N training samples.

        

Figure 2. Single-hidden-layer ELM.

In this case, the neural network is built to find the relationship between a specific set of data collected by the antenna (microphone array) and the CSD of a source with characteristic parameters.

Using the PSD as input and the CSD (at the microphones) as activation vectors, and based on the comparison between the residual error (E) and the expected error (e), the size (L) of the layer grows from L = 0 up to L = Lmax, which sets a limit for the following training step:

While L < Lmax and E > e:
    L = L + 1                                  # create a new neuron in the hidden layer
    generate random w_L, b_L and compute h_L   # random node parameters; h_L stores its activations for all N training samples
    beta_L = (E · h_L) / (h_L · h_L)           # set the output weight of the new neuron from the residual error
    E = E - beta_L · h_L                       # calculate the new residual error
Endwhile

In this way, all the hidden-node parameters (random weights and biases) are generated and the output weights are set so as to minimize the residual error of the network.

The output layer is responsible for associating the weighted sum of all sigmoid hidden-layer neurons with the source CSD that generated the respective input values.
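
A runnable sketch of this incremental construction is given below (Python/NumPy), following the standard I-ELM recipe: each new node receives random input weights and bias, and its output weight beta_L is computed from the current residual as beta_L = (E · h_L) / (h_L · h_L). The stopping criterion, data shapes and function names are illustrative assumptions, not necessarily the project's exact implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_ielm(X, T, L_max, e_target, seed=0):
    """Incremental ELM (I-ELM) sketch for a single scalar output.

    X : (N, d) training inputs,  T : (N,) training targets.
    Hidden nodes are added one at a time until the residual is small
    enough or L_max nodes have been created.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    E = T.astype(float).copy()      # residual error, initialized with the targets
    nodes = []                      # (w, b, beta) for every hidden node added so far
    L = 0
    while L < L_max and np.linalg.norm(E) > e_target:
        L += 1
        w = rng.standard_normal(d)           # random input weights of the new node
        b = rng.standard_normal()            # random bias of the new node
        h = sigmoid(X @ w + b)               # activations of the new node on all N samples
        beta = (E @ h) / (h @ h)             # output weight computed from the current residual
        E = E - beta * h                     # update the residual error
        nodes.append((w, b, beta))
    return nodes

def predict_ielm(X, nodes):
    # network output: weighted sum of all sigmoid hidden-node activations
    return sum(beta * sigmoid(X @ w + b) for w, b, beta in nodes)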

V. SOUND SOURCE STRUCTURE

Knowing the sound-source structure is important for building the neuron activation vector. In this project, the sound data used are based on a jet-flow model that provides the known source CSD as:

\langle S(y_1,\omega)\, S^*(y_1',\omega) \rangle = \dots

The observer pressure spectral density (PSD):

\langle p(\mathbf{x},\omega)\, p^*(\mathbf{x},\omega) \rangle \approx \dots

The observer cross-spectral density (CSD):

\langle p(\mathbf{x},\omega)\, p^*(\mathbf{x}',\omega) \rangle = R\, \langle S(\mathbf{y},\omega)\, S^*(\mathbf{y}',\omega) \rangle\, R^*

This source model uses a wave-packet description of the jet-flow dynamics.
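
The closed-form expression of the source CSD is not reproduced above, but a generic wave-packet CSD can be sketched as the product of a Gaussian amplitude envelope, a convective (hydrodynamic) phase term and a Gaussian axial coherence decay. All functional choices, parameter names and default values below are illustrative assumptions and do not reproduce the project's actual model.

import numpy as np

def wavepacket_csd(y, omega, U_c=0.6 * 343.0, L_env=1.0, L_coh=0.5, A0=1.0):
    """Illustrative wave-packet source CSD  <S(y1, w) S*(y1', w)>  on an axial grid.

    y     : (n,) axial source positions [m]
    omega : angular frequency [rad/s]
    U_c   : assumed convection velocity [m/s]
    L_env : envelope length scale [m],  L_coh : coherence length scale [m]
    """
    y1, y2 = np.meshgrid(y, y, indexing='ij')
    amplitude = A0 * np.exp(-(y1**2 + y2**2) / L_env**2)    # Gaussian amplitude envelope
    phase = np.exp(1j * (omega / U_c) * (y1 - y2))          # convective phase between axial positions
    coherence = np.exp(-((y1 - y2)**2) / L_coh**2)          # axial coherence decay
    return amplitude * phase * coherence                    # (n, n) Hermitian CSD matrix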

...
