
All Questions

0 votes · 0 answers · 47 views

Is it possible to feed embeddings generated by BERT to an LSTM-based autoencoder to get the latent space?

I've just learned about how BERT produces embeddings. I might not understand it fully. I was thinking of doing a project leveraging those embeddings and feeding them to an autoencoder to generate latent ...
asked by Nik Imran
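A minimal sketch of what the question describes, assuming the BERT embeddings are precomputed as a `(batch, seq_len, 768)` array (the random array below is a stand-in for real BERT output, and all sizes are illustrative):

```python
# Feed precomputed BERT-style token embeddings into an LSTM autoencoder
# and read the latent vector from the bottleneck.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

seq_len, emb_dim, latent_dim = 32, 768, 64  # illustrative sizes

inputs = keras.Input(shape=(seq_len, emb_dim))
latent = layers.LSTM(latent_dim)(inputs)             # encoder: sequence -> latent vector
repeated = layers.RepeatVector(seq_len)(latent)      # expand latent back into a sequence
outputs = layers.LSTM(emb_dim, return_sequences=True)(repeated)  # decoder

autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, latent)                # use this submodel to extract latents
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(4, seq_len, emb_dim).astype("float32")  # stand-in for BERT embeddings
z = encoder.predict(x, verbose=0)                    # latent codes, shape (4, 64)
```

After training the autoencoder on real embeddings, `encoder.predict` yields the latent space directly.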
0 votes · 0 answers · 184 views

TensorFlow error: required broadcastable shapes when training a variational autoencoder for text posts

Good morning, I'm attempting to apply and adapt a variational autoencoder that I found here to a dataset consisting of news headlines. The data will feed into the neural network, but the neural network ...
asked by Paul (11)
2 votes · 1 answer · 508 views

TextVectorization and autoencoder for feature extraction from text

I'm trying to solve a problem which is as follows: I need to train an autoencoder to extract useful data from text. I will use the trained autoencoder in another model to extract features. The goal ...
asked by MRL (43)
0 votes · 1 answer · 209 views

Sentence VAE loss layer implementation on Keras giving issues

So I've been implementing the sentence VAE on TF-Keras (latest versions). The custom function below calculates the VAE loss from sparse categorical outputs. def vae_loss(encoder_inputs, ...
asked by HMUNACHI
2 votes · 2 answers · 347 views

Keras autoencoder model for detecting anomalies in text

I am trying to create an autoencoder that is capable of finding anomalies in text sequences: X_train_pada_seq.shape (28840, 999) I want to use an Embedding layer. Here is my model: encoder_inputs = ...
asked by MRL (43)
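A sketch of the setup this question describes, assuming padded token-ID sequences like `X_train_pada_seq` (the vocabulary size and sequence length below are hypothetical, not taken from the question): an Embedding layer feeds an LSTM autoencoder, and because the decoder predicts token IDs, the reconstruction loss is sparse categorical cross-entropy rather than MSE.

```python
# Embedding + LSTM autoencoder over token-ID sequences; reconstruction
# error can then serve as an anomaly score for a text sequence.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, seq_len, latent_dim = 1000, 50, 32  # illustrative sizes

encoder_inputs = keras.Input(shape=(seq_len,))
x = layers.Embedding(vocab_size, 64)(encoder_inputs)   # token IDs -> dense vectors
latent = layers.LSTM(latent_dim)(x)                    # encoder bottleneck
x = layers.RepeatVector(seq_len)(latent)
x = layers.LSTM(64, return_sequences=True)(x)
outputs = layers.Dense(vocab_size, activation="softmax")(x)  # per-step token distribution

model = keras.Model(encoder_inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

seqs = np.random.randint(1, vocab_size, size=(8, seq_len))   # toy token-ID batch
recon = model.predict(seqs, verbose=0)                       # shape (8, 50, 1000)
```

The key point is that the targets are the input IDs themselves (`model.fit(seqs, seqs, ...)`), so sequences the trained model reconstructs poorly are candidate anomalies.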
0 votes · 1 answer · 226 views

Where are the hidden layers?

I am a bit new to autoencoders. I have this code from Keras (https://blog.keras.io/building-autoencoders-in-keras.html). I wonder whether my comments in the code here are correct. input_img = keras.Input(...
asked by Dammio (1,115)
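For reference, the minimal autoencoder from the linked Keras blog post, annotated: in this functional-API model the single Dense(32) "encoded" layer is the only hidden layer, the Dense(784) layer is the output layer, and `keras.Input` is a placeholder rather than a layer with weights.

```python
from tensorflow import keras
from tensorflow.keras import layers

encoding_dim = 32                          # size of the bottleneck
input_img = keras.Input(shape=(784,))      # input placeholder, not a hidden layer
encoded = layers.Dense(encoding_dim, activation="relu")(input_img)  # hidden layer (bottleneck)
decoded = layers.Dense(784, activation="sigmoid")(encoded)          # output layer

autoencoder = keras.Model(input_img, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
n_layers = len(autoencoder.layers)  # 3: InputLayer, Dense(32), Dense(784)
```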
0 votes · 1 answer · 1k views

What type of autoencoder for text similarity?

I don't have any experience working with neural networks, so any help would be highly appreciated. I am solving the following task: I want to find the similarity score between sentence pairs. My ...
asked by std (71)
1 vote · 1 answer · 153 views

Can I train Word2vec using a stacked autoencoder with non-linearities?

Every time I read about Word2vec, the embedding is obtained with a very simple autoencoder: just one hidden layer, linear activation for the initial layer, and softmax for the output layer. My ...
asked by Leevo (1,753)
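The network the question describes (one linear hidden layer, softmax output) can be sketched as follows; this is the word2vec-style architecture as the question characterizes it, not the actual gensim/word2vec implementation, and the sizes are illustrative:

```python
# One-hot input word -> linear hidden layer (the embedding) -> softmax over
# the vocabulary predicting a context word.
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, emb_dim = 500, 16  # illustrative sizes

model = keras.Sequential([
    keras.Input(shape=(vocab_size,)),                            # one-hot input word
    layers.Dense(emb_dim, activation="linear", use_bias=False),  # linear hidden layer = embedding
    layers.Dense(vocab_size, activation="softmax"),              # predict the context word
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# The learned embeddings are the hidden layer's weight matrix:
embeddings = model.layers[0].get_weights()[0]   # shape (vocab_size, emb_dim)
```

Replacing the linear activation with a non-linearity (or stacking layers) is exactly the modification the question asks about; the embedding would then no longer be a single linear projection of the one-hot input.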
0 votes · 1 answer · 586 views

Does attention improve performance for seq2seq autoencoders?

I'm trying to implement an RNN autoencoder and I was wondering if attention would improve my results. My end goal is to build a document similarity search engine, and I'm looking for ways to encode ...
asked by Vazymolo
0 votes · 1 answer · 311 views

Can I use `tf.contrib.seq2seq.dynamic_decode` to replace the function `tf.nn.dynamic_rnn` in an encoder-decoder framework?

Actually, I want to generate sequences just like Alex Graves did. I have a TensorFlow implementation. At the same time, I want to try the attention-based seq2seq model to generate ...
asked by Lily.chen (119)
4 votes · 1 answer · 2k views

Feature construction for text classification using autoencoders

Autoencoders can be used to reduce dimensionality in feature vectors - as far as I understand. In text classification a feature vector is normally constructed via a dictionary - which tends to be ...
asked by beyeran (885)
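A minimal sketch of the idea in this last question, assuming a high-dimensional dictionary-based (bag-of-words) vector as input (all sizes and the toy data below are illustrative): a dense autoencoder compresses the vector, and the bottleneck activations become the feature vector for a downstream classifier.

```python
# Compress a bag-of-words vector with a dense autoencoder and use the
# bottleneck as the classification feature vector.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, feat_dim = 2000, 64  # illustrative sizes

inp = keras.Input(shape=(vocab_size,))
code = layers.Dense(feat_dim, activation="relu")(inp)      # compressed features
out = layers.Dense(vocab_size, activation="sigmoid")(code) # reconstruction

autoencoder = keras.Model(inp, out)
feature_extractor = keras.Model(inp, code)                 # bottleneck submodel
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

bows = (np.random.rand(10, vocab_size) > 0.95).astype("float32")  # toy binary BoW batch
features = feature_extractor.predict(bows, verbose=0)             # shape (10, 64)
```

After fitting the autoencoder on real bag-of-words data (`autoencoder.fit(bows, bows, ...)`), `feature_extractor` replaces the raw dictionary vector with a 64-dimensional representation.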