Luxury market vs large consumer market

Which is more profitable: companies that target the luxury market, or companies that target the large consumer market?

For example, Rolls-Royce vs. Nissan, or Louis Vuitton vs. Zara?

I did some searching through Forbes and found that luxury markets are more profitable, but I want a more specific, accurate, theoretical response.


Manually convert transactions list to market basket format

What I am trying to do is edit a transactions list stored as a CSV file to something that can be used by the “arules” package in R. But I also want to keep it as a data frame so I can export it to a different CSV file. So I started with a simple data set:

Fruit   Milk    Eggs
yes   yes     no
no    no      yes
no    yes     yes
yes   yes     yes

It needs to look like this:

Fruit   Milk    
        Milk    Eggs
Fruit   Milk    Eggs

So, I read in the CSV and get the column names:

df1 <- read.csv('basket_test.csv')
l <- c()
# create a vector of item names
for(i in 1:3){
  l <- append(l, names(df1)[i])
}

Here's where I run into a problem: R has read the columns in as factors (categorical data), and it complains when I try to change the values:

# replace "yes" with the item name
for(x in 1:3){
  for(y in 1:4){
    df1[y,x] <- l[x]
  }
}

It gave me this error:

invalid factor level, NA generated (repeated once for each assignment)

And the data frame now looks like this:

  Fruit Milk Eggs
1      no
2    no  no  
3   no   

I tried as.character on the data frame's cells by iterating through each one, then ran the routine again, but that did not work. So, what do I do to my data frame in order to change the values within it?
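The conversion itself is simple once the cells are plain strings rather than factors. As a language-neutral illustration of the same yes/no-to-item-name replacement, here is a Python sketch using the example table above:

```python
import csv
import io

# The example data from the question, inlined instead of read from a file.
raw = """Fruit,Milk,Eggs
yes,yes,no
no,no,yes
no,yes,yes
yes,yes,yes
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Replace each "yes" with its column name and each "no" with an empty cell.
basket = [[name if row[name] == "yes" else "" for name in row] for row in rows]
```

Each row of `basket` then lists the items present in that transaction, which is the market-basket shape the question is after.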



Neural Network Predicting Live Market Data (fun project for BTC prediction)

Just for fun (not for profit), I wrote a neural-network application that predicts the next value from live data on Bitcoin exchange markets.
To clarify: I am not asking whether my algorithm is correct or whether my model will make me rich. I am studying NNs and live prediction, so please read it that way.

There are two sources (markets) from which I get real data.
The input is obviously the current buy price, and the network tries to guess the next price. However, I don't care about timing here; I want to predict the next possible price, so I do not treat an unchanged buy price as a new input. I poll each market every 100 ms and ask for the current price: if the price has changed I store it, and if it has not changed I ignore it.
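The poll-and-filter step described above is essentially change detection: consecutive identical prices are dropped. A minimal sketch of that filter (Python, with a made-up sample list):

```python
def changed_prices(samples):
    """Keep a polled price only if it differs from the previously kept one."""
    kept = []
    for price in samples:
        if not kept or price != kept[-1]:
            kept.append(price)
    return kept

# Consecutive duplicates are dropped; a return to an earlier price is kept.
history = changed_prices([412.0, 412.0, 412.5, 412.5, 412.5, 413.0, 412.0])
```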

I am training the network by feeding in historical prices, around 2k for each market. The network is configured as follows:

INPUT: 3 inputs

Training runs until the error reaches a factor of 0.001.

Now to the questions.

1) I store only values that change, so I don't save the price if it hasn't changed. Is this approach OK, or should I record the price even if it doesn't change? Does this affect the prediction, and by how much? I don't want to predict the value at 15:00; I want the network to predict the next possible buy price. Time does not matter here.

2) If you look at the charts below, you can clearly see that the network's output is 'lagged' (especially in the second screenshot) and that it doesn't handle high peaks. Worse, it can't predict them at all; it always predicts the opposite trend. Is this normal, or is there some explanation for this behaviour?

[two screenshots: network prediction vs. live price, omitted]
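A useful sanity check for question 2: when a series is close to a random walk, a network trained to minimise squared error tends to converge toward the naive persistence forecast (predict that the next price equals the current one), which looks exactly like a lagged copy of the real curve and never anticipates peaks. A minimal Python sketch of that baseline on synthetic random-walk data (all numbers here are simulated, not real market data):

```python
import random

random.seed(0)

# Simulate a random-walk "price" series standing in for the stored buy prices.
prices = [100.0]
for _ in range(999):
    prices.append(prices[-1] + random.gauss(0.0, 1.0))

# Persistence baseline: predict that the next price equals the current one.
persistence_preds = prices[:-1]
targets = prices[1:]

# Mean squared error of the baseline; for unit-variance steps this is near 1.
mse = sum((p - t) ** 2 for p, t in zip(persistence_preds, targets)) / len(targets)
```

If the trained network's validation error is not clearly better than this baseline's, the 'lag' is simply the network reproducing persistence rather than learning structure.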

Source code:

#include "Core/CMemTracer.h"
#include "Core/CDatabase.h"
#include "Core/CCalcModule.h"
#include "Core/CCalcModuleNN.h"
#include "Core/CNeuralNetwork.h"

CNeuralNetwork _NeuralNetwork;
CDatabase _Database;

int main(int argc, const char * argv[])
{
    std::string m_strDatabaseHost;
    std::string m_strDatabaseName;
    std::string m_strDatabaseUsername;
    std::string m_strDatabasePassword;
    std::string m_strExchange;

    int         m_iNumOfHistoryForTraining = 0;
    int         iNeuralNetworkInputs = 5;
    int         iNeuralNetworkHidden = 2 * iNeuralNetworkInputs + 1;
    int         iNeuralNetworkOutputs = 1;
    int         iMaximumTrainingEpoch = 10000000;
    float       fMinimum = 0;
    float       fMaximum = 1000;
    float       fMaximumNetworkError = 0.000720;
    float       fNeuralNetworkLearningRate = 0.5;
    float       fNeuralNetworkMomentum = 0.1;

    std::vector<float> vHistory;
    std::vector<float> vNormalisedData;

    m_strDatabaseHost       = "";
    m_strDatabaseName       = "Trader";
    m_strDatabasePassword   = "password";
    m_strDatabaseUsername   = "root";
    m_strExchange           = "exBitMarket";

    // How much data we fetch from the DB
    m_iNumOfHistoryForTraining = 2000;

    CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Info, Connecting to Database");

    // Load up Database
    if(_Database.Connect(m_strDatabaseUsername, m_strDatabasePassword, m_strDatabaseHost) == false)
    {
        CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Error, cant connect to Database");
        return false;
    }

    CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Info, Selecting Database");

    // Select Database
    if(_Database.SelectDatabase(m_strDatabaseName) == false)
    {
        CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Error, cant select Database");
        return false;
    }

    // Get x Data from Database
    std::string strQuery = "SELECT * FROM (SELECT * FROM exData WHERE Exchange='"+m_strExchange+"' ORDER BY Epoch DESC LIMIT "+stringify(m_iNumOfHistoryForTraining)+")sub ORDER BY Epoch ASC";

    // Query DB
    CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Info, Querying database");

    CDatabase::tDatabaseQueryResult _QuerySelect;
    if(_Database.Query(strQuery, _QuerySelect) == false)
    {
        CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Error, cannot query database");
        return false;
    }

    CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Info, Got %i results", _QuerySelect.m_iRows);

    // If Data available
    if(_QuerySelect.m_iRows >= m_iNumOfHistoryForTraining)
    {
        // Push back Buy value to Historical Data Vector
        for(int c = 0; c < _QuerySelect.m_vRows.size(); c++)
            vHistory.push_back(_QuerySelect.m_vRows[c].m_fBuy); // assumed accessor; the original line was lost

        vNormalisedData = vHistory;
    }
    else
    {
        CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Error, not enough data returned (%i of %i required)", _QuerySelect.m_iRows, m_iNumOfHistoryForTraining);
        return false;
    }

    CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Info, Normalising data for Neural network input");

    // Normalise
    // Find max, min values from the dataset for later normalization
    std::vector<float>::iterator itMax = std::max_element(vHistory.begin(), vHistory.end(), [](const float& x, const float& y) { return x < y; });
    std::vector<float>::iterator itMin = std::min_element(vHistory.begin(), vHistory.end(), [](const float& x, const float& y) { return x < y; });

    // Store Min/Max
    fMinimum = itMin[0];
    fMaximum = itMax[0];

    CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Info, Normalised data <%f, %f>", fMinimum, fMaximum);

    // Important - the neural network has to be set up to match the activation
    // function: this normalization and the NN must use the same range.
    // The log-sigmoid activation function maps to (0, 1).
    for(int a = 0; a < vHistory.size(); a++)
        vNormalisedData[a] = (vHistory[a] - itMin[0]) / (itMax[0] - itMin[0]);

    CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Info, Initializing neural network with the setup %i/%i/%i Learning Rate: %f, Momentum: %f",
                               iNeuralNetworkInputs, iNeuralNetworkHidden, iNeuralNetworkOutputs,
                               fNeuralNetworkLearningRate, fNeuralNetworkMomentum);

    // Build the network with arguments passed
    _NeuralNetwork.Initialize(iNeuralNetworkInputs, iNeuralNetworkHidden, iNeuralNetworkOutputs);
    _NeuralNetwork.SetMomentum(false, fNeuralNetworkMomentum);

    // Train
    double  dMaxError   = 100.0;
    double  dLastError  = 12345.0;
    int     iEpoch      = 0;
    int     iLastDump   = 0;
    int     iNumberOfDataForTraining = (vNormalisedData.size() / 2) - iNeuralNetworkInputs + iNeuralNetworkOutputs;
    CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Info, starting training with %i data out of %i", iNumberOfDataForTraining, vNormalisedData.size());

    // Perform training on the training data
    while ((dMaxError > fMaximumNetworkError) && (iEpoch < iMaximumTrainingEpoch))
    {
        dMaxError = 0;

        // The input is normalised and ready for use; perform the training.
        // Use the first half of the normalised data for training, the rest
        // will be used to validate the network.
        for(int a = 0; a < iNumberOfDataForTraining; a++)
        {
            // Set Inputs
            for(int b = 0; b < iNeuralNetworkInputs; b++)
                _NeuralNetwork.SetInput(b, vNormalisedData[a + b]);

            // Set desired Output for the newest value
            _NeuralNetwork.SetDesiredOutput(0, vNormalisedData[a + iNeuralNetworkInputs]);

            // Feed data forward (assumed method name; the original call was lost)
            _NeuralNetwork.FeedForward();

            dMaxError += _NeuralNetwork.CalculateError();

            // Backpropagate to learn (assumed method name; the original call was lost)
            _NeuralNetwork.BackPropagate();
        }

        // Divide by the total array size to get the global network error
        dMaxError /= vNormalisedData.size();

        // Dump some stats now
        if(CUtils::GetEpoch() - iLastDump > 1)
        {
            CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Training Error Factor: %f / %f Epoch: %i", dMaxError, fMaximumNetworkError, iEpoch);
            iLastDump = CUtils::GetEpoch();
        }

        // Increment the epoch count
        iEpoch++;

        // Store last error for early stopping
        dLastError = dMaxError;
    }

    CLogger::Instance()->Write(XLOGEVENT_LOCATION, "Info, starting validation with %i data", vNormalisedData.size() - iNumberOfDataForTraining);

    dMaxError = 0;

    // Now check against the validation data
    // (stop iNeuralNetworkInputs early so the target index stays in range)
    for(int a = iNumberOfDataForTraining; a < (int)vNormalisedData.size() - iNeuralNetworkInputs; a++)
    {
        // Set Inputs
        for(int b = 0; b < iNeuralNetworkInputs; b++)
            _NeuralNetwork.SetInput(b, vNormalisedData[a + b]);

        // Set desired Output for the newest value
        _NeuralNetwork.SetDesiredOutput(0, vNormalisedData[a + iNeuralNetworkInputs]);

        // Feed data forward (assumed method name; the original call was lost)
        _NeuralNetwork.FeedForward();

        dMaxError += _NeuralNetwork.CalculateError();
    }

    // Divide by the total array size to get the global network error
    dMaxError /= vNormalisedData.size();

    CLogger::Instance()->Write(XLOGEVENT_LOCATION, "%i Network Trained, Error Factor on Validation data = %f",
                               iEpoch, dMaxError);

    // Save the network to an output file (saving code omitted in the original)

    return 0;
}

I'm not asking about the algorithm, just about the output of the network: does it normally happen like that, or does it look like the network is overfitted?

I added updated code that trains on the training data and validates on the validation data.


Is the Moving Average of ARMA the same as the Moving Average in the Stock Market?

I’m studying time series prediction and I have some questions.

Does the moving-average (MA) model from the ARMA family have the same meaning as the moving averages used in technical-analysis stock trading?

Is there any place where I can find .NET or Java implementations of time-series forecasting with auto-regressive AR(p), moving-average MA(q), ARMA(p, q) and ARIMA(p, d, q) models?
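The short answer suggested by the definitions: the technical-analysis moving average is a rolling mean of past *prices*, while the MA(q) model of the ARMA family expresses the series as a combination of lagged white-noise *error* terms, so they share a name but not a concept. A small sketch of both (Python, with made-up synthetic data):

```python
import random

random.seed(1)

# Technical-analysis simple moving average: rolling mean of the last q prices.
def sma(prices, q):
    return [sum(prices[i - q + 1:i + 1]) / q for i in range(q - 1, len(prices))]

# MA(1) model from time-series analysis: x_t = mu + e_t + theta * e_{t-1},
# driven by unobserved white-noise shocks, not by past prices.
def simulate_ma1(n, mu=0.0, theta=0.6):
    xs, prev_e = [], 0.0
    for _ in range(n):
        e = random.gauss(0.0, 1.0)
        xs.append(mu + e + theta * prev_e)
        prev_e = e
    return xs

smoothed = sma([10.0, 11.0, 13.0, 12.0, 14.0], 3)  # rolling price means
series = simulate_ma1(200)                          # realisation of an MA(1) process
```

The first smooths an observed series; the second generates one from noise, which is why MA(q) parameters are estimated rather than computed directly.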


OTM puts or ITM puts in fear of market crash

If I own some shares, but at the same time I am afraid of a market crash, should I buy OTM puts or ITM puts?
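For intuition, the trade-off can be put in numbers: an ITM put costs more premium but locks in a higher floor, while an OTM put is cheaper but only starts protecting below its strike. A toy sketch (all strikes and premiums are made-up illustration values, not market quotes):

```python
# Value at expiry of one share hedged with one long put, net of the premium paid.
def hedged_value(spot_at_expiry, strike, premium):
    put_payoff = max(strike - spot_at_expiry, 0.0)
    return spot_at_expiry + put_payoff - premium

# Hypothetical contracts on a stock trading at 100 today.
otm = {"strike": 90.0, "premium": 2.0}    # cheap, protects only below 90
itm = {"strike": 110.0, "premium": 13.0}  # expensive, floor near 110 - 13 = 97

flat  = (hedged_value(100.0, **otm), hedged_value(100.0, **itm))  # no crash
crash = (hedged_value(40.0, **otm),  hedged_value(40.0, **itm))   # deep crash
```

In this toy example the OTM hedge comes out ahead if nothing happens, while the ITM hedge keeps its floor however deep the crash goes; the real decision is how much premium you are willing to pay for how high a floor.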


Which Stream Processing Systems should we use to replay market events?

We are consuming financial quotes from different exchanges and we want the ability to replay them. Currently, we don't save them in our storage. We want to save them either to a database or to a file and replay them later. When we replay, we want to preserve the delay between data points, i.e. simulate the real events at their real speed in the past. Should we use a Stream Processing System to achieve this, given that we might run some analytics in the future? Which Stream Processing System would you recommend for this task, if it makes sense? We are a Java house.
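Whatever system is chosen, the replay step itself reduces to re-emitting timestamped records while preserving their inter-arrival delays; a stream processor mainly adds value for the later analytics. A minimal sketch of the core loop (Python for brevity, but the same logic ports directly to Java; the `speed` factor and sample quotes are made up):

```python
import time

# Hypothetical recorded quotes: (epoch_seconds, payload) pairs, as they might
# be read back from a file or database.
recorded = [(0.00, "quote A"), (0.05, "quote B"), (0.30, "quote C")]

def replay(events, sink, speed=1.0):
    """Re-emit events, preserving the original inter-arrival delays
    (divided by `speed` so backtests can run faster than real time)."""
    prev_ts = events[0][0]
    for ts, payload in events:
        time.sleep((ts - prev_ts) / speed)  # wait out the original gap
        sink(payload)
        prev_ts = ts

out = []
replay(recorded, out.append, speed=100.0)
```

The `sink` callback is where a consumer, message bus, or analytics job would plug in.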



What happens to ETF distributions during a market crash?

Every month I invest a percentage of my income in three ETFs. Two provide distributions and income to me every three months. My thoughts are:

  • The distributions come from dividends from the companies in the ETF

  • Companies paying dividends usually have a solid business plan which allows them to do so even during recessions

  • These ETFs are MSCI EM, Euro Stoxx 600 and they contain mainly large-cap companies

  • The distributions should continue during a crisis

What will happen to these distributions if a financial crisis occurs?


How can I gauge market demand for my skills before going freelance? [on hold]

I am a Mechanical Engineer, currently employed full-time for a large multinational. I am considering the option of leaving my full-time job to do similar work freelance, for a number of reasons that I won’t go into here in detail.

So, I am wondering how I can gauge how much demand there will be for my skills in the market, before I take the plunge and quit my current job. I have a family to support and mortgage to pay, so I need to have a good level of confidence that I can earn at least as much as I am now, before I could make the jump.

Is it realistic to expect that I could earn more working freelance than I am currently full-time?

I have some academic background in fluid dynamics and what I really want to do is develop into being a specialist in fluid dynamics/CFD.


Menapay Token ICO in Market?

Menapay is a blockchain-based payment gateway aimed at the Middle East and Africa, conceived out of the desire to fight and put a stop to the various difficulties faced by people in the Middle East and Africa when it comes to handling money, and thereby to make their lives easier.

It is the very first platform of its kind in the Middle East and African economy.

Characteristics of Menapay

Menapay stands out from other platforms of its kind because it provides a dependable payment gateway that is transparent and secure compared with ordinary banking transactions.


Comparison of normalization methods on market returns

I am looking to use a multi-factor model to make target-return predictions. Since the factor-returns come from different scales I need to normalize first.

There are different ways to normalize returns; to mention a few: subtract the mean and divide by the standard deviation (assumes non-zero drift), simply divide by the standard deviation, divide by the Euclidean norm, or divide by the standard deviation of highs/lows.

My question: is there a documented comparison of the different methods on noisy time-series data such as market returns, stating the pros and cons of each? Empirically, do you have any suggestions or remarks?
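For concreteness, the first three methods in the list differ only in what gets subtracted and what gets divided; a toy sketch on a made-up return series:

```python
import math

returns = [0.01, -0.02, 0.015, -0.005, 0.03]  # toy return series, not real data

n = len(returns)
mean = sum(returns) / n
std = math.sqrt(sum((r - mean) ** 2 for r in returns) / n)
norm = math.sqrt(sum(r * r for r in returns))

z_scored    = [(r - mean) / std for r in returns]  # subtract mean, divide by std
std_scaled  = [r / std for r in returns]           # divide by std only (zero-drift assumption)
unit_vector = [r / norm for r in returns]          # divide by Euclidean norm
```

z-scoring centres the series at zero; dividing by the norm rescales it to unit length, which matters when factors of very different magnitudes are combined.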
