## Question: Do you hang out with co-workers? I like to keep to myself.

I work with this guy, sometimes on weekends, and I've hung out with him a couple of times, just at another co-worker's house drinking some beer. I've been working with him for over a year now. Sometimes he will text me and sometimes I respond. This guy will text almost every day. For some reason he's told me he considers me a friend, yet we don't really hang out; we just work together a lot. We may text about football or work, but that doesn't mean we are friends.

I know he doesn't get out much. I myself don't have many friends, and I've always been OK not having many, but I still get out and socialize at bars. This guy has never been to a bar; he prefers to go to someone's place and drink, which I find boring. He is 48 with 4 kids, and I know he got divorced a few years ago. I'm 38 with no kids and have never been married. He told me his idea of going out is a drive. He's not like me at all.

I tried going to another co-worker's house to drink beer, and it was just boring for me after a while. I prefer to go to the bar, drink beer, and be social with people. I never drink at home or at someone's place, unless it's a party with some cute women.

This texting he does can be too much. I didn't mind it at first, but now it's almost every day, like he's my girlfriend. I do think it's time to stop responding. Maybe this guy is lonely.

## How to perform a worker's job asynchronously even after Redis stops working

I am creating objects asynchronously with Sidekiq in Ruby on Rails.

```ruby
class CreateLeadWorker
  include Sidekiq::Worker

  def perform(lead_attributes)
    # create the lead from the given attributes
  end
end
```

The worker above is called from the lead object's after-save callback:

```ruby
CreateLeadWorker.perform_async(lead_attributes)
```

This works as long as the Redis server is up.

If Redis stops working, how can I still run the lead-creation worker asynchronously without raising an exception in the application?

P.S. I don't want to perform the work synchronously.
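When Redis is unreachable, `perform_async` raises (with the redis-rb client, typically a `Redis::CannotConnectError`). A common pattern is to rescue that error at the enqueue site and persist the payload somewhere durable, then re-enqueue later. The sketch below simulates this so it runs without Sidekiq or Redis installed: `CreateLeadWorker` and `PendingJob` here are stand-ins I made up, not your real classes.

```ruby
# Stand-in for Redis::CannotConnectError so this file runs without redis-rb.
class RedisDownError < StandardError; end

# Stand-in worker that simulates Redis being down on enqueue.
class CreateLeadWorker
  def self.perform_async(attrs)
    raise RedisDownError, "Connection refused"
  end
end

# Durable fallback store (would be an ActiveRecord-backed table in the real app).
class PendingJob
  @store = []
  class << self
    attr_reader :store

    def create!(payload)
      @store << payload
    end
  end
end

def enqueue_lead(attrs)
  CreateLeadWorker.perform_async(attrs)
  :enqueued
rescue RedisDownError
  # Redis is down: persist the payload instead of raising to the caller,
  # so a periodic task can re-enqueue it once Redis is back.
  PendingJob.create!(attrs)
  :deferred
end

puts enqueue_lead({ name: "Acme lead" })  # prints "deferred" while Redis is down
```

In a real app, a cron or rake task would drain `PendingJob` rows back into `perform_async` once Redis responds again; I believe Sidekiq Pro's reliable push feature does something along these lines out of the box.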

## Question: When hard workers talk about how hard they work, do you feel lazy, or glad you don’t work that…

Example: if someone talked about how they worked at a job all day, every day, would you feel happy that you don't work as hard as them, or would you feel like a very lazy person? I'm in college, and when my college friends say that they work 8 hours after school, I kind of feel bad for not having a job. That's why I want to know how y'all feel.

## Directive ‘catch_workers_output = yes’ doesn't work as I want

I have CentOS 7 with Apache 2.4.35 (mpm_event) and php-fpm.

Everything works fine, but all PHP errors go to one big file shared by all my virtual hosts.

```ini
catch_workers_output = yes
```

I thought this directive might help me by sending all PHP errors to the Apache log file, but no luck.

I don't really know what I should do now.

Here is my php.conf:

```apacheconf
# we must declare a parameter in here (doesn't matter which) or it'll not register the proxy ahead of time
<Proxy "fcgi://php-fpm">
    ProxySet disablereuse=off
</Proxy>

# Redirect to the proxy
<FilesMatch \.php$>
    SetHandler proxy:fcgi://php-fpm
</FilesMatch>

#
# Allow php to handle Multiviews
#
AddType text/html .php

#
# Add index.php to the list of files that will be served as directory
# indexes.
#
DirectoryIndex index.php

ProxyErrorOverride on
```

P.S. I think an individual pool per site would solve the problem, but it would also use much more memory, because every pool needs to spawn idle workers before it can start serving…
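For what it's worth, `catch_workers_output = yes` only forwards the workers' stdout/stderr into php-fpm's own `error_log`, not into Apache's per-vhost logs, which would explain why nothing changed on the Apache side. Logging is configured on the FPM side; a per-pool sketch (the pool name and paths below are assumptions, not taken from your setup):

```ini
; Sketch of a per-pool config, e.g. /etc/php-fpm.d/www.conf (assumed path).
[www]
; Forward the workers' stdout/stderr to FPM's main error_log:
catch_workers_output = yes
; Send this pool's PHP errors to its own file instead of the shared one:
php_admin_flag[log_errors] = on
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
```

Note that `ProxyErrorOverride on` only swaps in Apache's error documents for backend error responses; it does not route PHP's log output to Apache.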

## Unicorn workers timing out intermittently

I'm getting intermittent timeouts from Unicorn workers for seemingly no reason, and I'd like some help debugging the actual problem. What makes it worse is the pattern: about 10–20 requests work, then one times out, then another 10–20 requests work and the same thing happens again.

I’ve created a dev environment to illustrate this particular problem, so there is NO traffic except mine.

The stack is Ubuntu 14.04, Rails 3.2.21, PostgreSQL 9.3.4, Unicorn 4.8.3, Nginx 1.6.2.

The Problem

I’ll describe in detail the time that it doesn’t work.

I request a url through the browser.

```
Started GET "/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z" for 127.0.0.1 at 2014-12-30 15:58:59 +0000
Completed 200 OK in 10.3ms (Views: 0.0ms | ActiveRecord: 2.1ms)
```

As you can see, the request completed successfully with a 200 response status in just 10.3ms.

However, the browser hangs for about 30 seconds and Unicorn kills the worker:

```
E, [2014-12-30T15:59:30.267605 #13678] ERROR -- : worker=0 PID:14594 timeout (31s > 30s), killing
E, [2014-12-30T15:59:30.279000 #13678] ERROR -- : reaped # worker=0
I, [2014-12-30T15:59:30.355085 #23533]  INFO -- : worker=0 ready
```

And the following error in the Nginx logs:

```
2014/12/30 15:59:30 [error] 23463#0: *27 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z HTTP/1.1", upstream: "http://unix:/app/shared/tmp/sockets/unicorn.sock:/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z", host: "localhost", referrer: "http://localhost/offers.xml?q%5bupdated_at_greater_than_or_equal_to%5d=2014-12-28T18:01:16Z&q%5bupdated_at_less_than_or_equal_to%5d=2014-12-28T19:30:21Z"
```

Again, there's no load on the server at all; the only requests going through are my own, and at random roughly one in every 10–20 requests hits this same problem.

It doesn't look like Unicorn is eating memory. I know this because I'm watching `watch -n 0.5 free -m`, and this is the result:

```
             total       used       free     shared    buffers     cached
Mem:          1995        765       1229          0         94        405
-/+ buffers/cache:        264       1730
Swap:          511          0        511
```

So the server isn’t running out of memory.

Is there anything further I can do to debug this issue, or any insight into what's happening?
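One observation worth checking: Rails reports 10.3 ms, but Unicorn kills the worker after 30 s, so the missing time is spent outside the Rails action, somewhere in the middleware stack or between the app returning and the response reaching Nginx. A tiny Rack middleware placed at the outermost position can confirm whether the full app call (middleware included) is actually fast. This is a generic debugging sketch, not part of your stack; the class name and threshold are my own, and it's run here against a fake app so it's self-contained:

```ruby
# Outermost Rack middleware: times the whole app call as Unicorn sees it,
# so slowness hiding in other middleware shows up even when Rails logs 10 ms.
class RequestTimer
  def initialize(app, threshold: 5.0)
    @app = app
    @threshold = threshold
  end

  def call(env)
    start = Time.now
    status, headers, body = @app.call(env)
    elapsed = Time.now - start
    warn "slow request: #{env['PATH_INFO']} took #{elapsed.round(2)}s" if elapsed > @threshold
    [status, headers, body]
  end
end

# In config.ru you would put it first: `use RequestTimer`.
# Self-contained demo against a fake downstream app:
fake_app = ->(env) { [200, { "Content-Type" => "text/plain" }, ["ok"]] }
status, _headers, body = RequestTimer.new(fake_app).call({ "PATH_INFO" => "/offers.xml" })
```

If this logs nothing for the hanging requests, the time is being lost after the app returns (e.g. writing the response to the socket), which would point at the Unicorn/Nginx socket setup rather than the application.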

## What are the effects of the HR392 (fairness for skilled immigrant workers) bill?

Recently there has been a lengthy discussion about HR392 (fairness for skilled immigrant workers) in the US House and Senate. If this bill is passed by the House and Senate, the per-country limitation on green cards will be lifted, and whoever came to the US first will be served first. As a result, due to the huge backlog of Indian and Chinese immigrants waiting for their green cards, it could take 10 years for citizens of other countries to become able to apply for adjustment of status.

My question is: if it is passed, will it really put the rest of the world behind the Indian and Chinese applicants in the green card backlog?

## Assign tasks to workers efficiently

Two parts to my question:

1. Say I have a set of twenty tasks and two workers, and the goal is to complete as many tasks as possible using as few workers as possible.

```python
import numpy as np

np.random.seed(4)
tasks = np.random.rand(20, 4)
```

Right now my process is: (a) enumerate all possible combinations of tasks, (b) for each combination: optimize workers to complete tasks (if possible), (c) run until optimization fails and all tasks are assigned to either worker A or worker B.

If a worker can complete the task `[1,1,1,1]`, then that worker can complete any task, since every other task is less than or equal to it in each entry. If a worker can complete the task `[0,1,1,1]`, it cannot complete any task with a value $> 0$ in the $0^{th}$ entry.

Obviously in the actual problem there are many constraints that do not allow a worker to get to complete all tasks all the time.

The workers can be optimized to meet the needs of the tasks, unless the worker cannot meet the constraints and the optimization procedure fails.

My question is, how can I build this process so that I don’t have to check every combination of tasks all the time… `[0], [1], ..., [0,1,...,20]`, and quickly/efficiently assign the tasks to the workers?

I have tried a genetic algorithm, and it takes a very long time. I am open to methods that cut the time significantly and give a generally near-optimal solution.
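Rather than enumerating every subset, a cheap first step is a greedy first-fit-decreasing heuristic: sort the tasks hardest-first and hand each one to the first worker whose capability vector dominates it. The sketch below uses a simplified model of your setup: the two capability vectors are made up, and `can_complete` stands in for your real optimization step. It won't be optimal, but it costs only tasks × workers dominance checks and gives a baseline:

```python
import numpy as np

np.random.seed(4)
tasks = np.random.rand(20, 4)

# Assumed fixed capability vectors for workers A and B (not from the question).
workers = [
    np.array([1.0, 1.0, 0.5, 1.0]),
    np.array([0.6, 1.0, 1.0, 1.0]),
]

def can_complete(worker, task):
    """A worker can take a task iff the task is <= the worker in every entry."""
    return bool(np.all(task <= worker))

# Hardest tasks first (largest maximum requirement), so infeasibility
# surfaces early instead of after the easy tasks have used up capacity.
order = np.argsort(-tasks.max(axis=1))

assignment = {}   # task index -> worker index
unassigned = []
for i in order:
    for w, cap in enumerate(workers):
        if can_complete(cap, tasks[i]):
            assignment[int(i)] = w
            break
    else:
        unassigned.append(int(i))

print(f"{len(assignment)} assigned, {len(unassigned)} unassigned")
```

If you need optimality rather than a heuristic, the problem maps naturally onto an integer program with one binary variable per task-worker pair plus your feasibility constraints; at 20 tasks and 2 workers, off-the-shelf MILP solvers handle that scale easily.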

## I am looking for serious workers to work in my home here in LONDON

Hello interested applicant, I seek your permission to post on this page. Job opportunity in LONDON… I am looking for serious workers to work in my home here in LONDON, such as an Au Pair/Nanny, Babysitter, Chef, Driver, Security Guard, Office Cleaner, Domestic Worker, Housekeeper & Governess… The host family is responsible for the flight ticket, and food and accommodation are free. If you are interested in the work, email me or send your CV to (gracesmithluv772@gmail.com).