Are there any well documented code review processes and tools for Apex code changes?

We have quite a few Apex classes now. Our current process involves far too much copying and pasting, so we would like to be able to track changes to code as well as perform code review on anything that will go out. Are there any development tools similar to GitHub, Fisheye or Bitbucket specifically for Salesforce development? I’ve seen some posts about using Eclipse extensions, but was wondering if there is an easier way that doesn’t require a specific IDE.
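
For example, if version control is the main goal, my understanding is that the Metadata API (e.g. via the Ant-based Force.com Migration Tool, which needs no particular IDE) can pull all Apex classes and triggers into a local folder that can then be committed to Git and reviewed like any other code. A package.xml along these lines should retrieve them (the API version is just an example):

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>ApexClass</name>
    </types>
    <types>
        <members>*</members>
        <name>ApexTrigger</name>
    </types>
    <version>36.0</version>
</Package>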


Tor downtime from multiple processes

I left my Tor relay (running on an AWS EC2 instance with Ubuntu Xenial Xerus) alone, and it operated fine for a few days. Then the relay started counting downtime, and the “last seen” timestamp on the Tor Metrics website showed it had last been seen a few days ago. While poking around on the command line, I got the following log messages:

  • warn: “It looks like another Tor process is running with the same data directory. Waiting 5 seconds to see if it goes away.”
  • info: “tor_lockfile_lock(): Locking "/home/ubuntu/.tor/lock"”
  • err: “No, it’s still there. Exiting.”
  • err: “set_options(): Bug: acting on config options left us in a broken state. Dying. (on Tor 0.3.3.7)”
  • err: “Reading config failed--see warnings above.”

I looked at the source code, and I found this code block:

static tor_lockfile_t *lockfile = NULL;

int
try_locking(or_options_t *options, int err_if_locked)
{
  if (lockfile)
    return 0;
  else {
    char *fname = options_get_datadir_fname2_suffix(options, "lock",NULL,NULL);
    int already_locked = 0;
    tor_lockfile_t *lf = tor_lockfile_lock(fname, 0, &already_locked);
    tor_free(fname);
    if (!lf) {
      if (err_if_locked && already_locked) {
        int r;
        log_warn(LD_GENERAL, "It looks like another Tor process is running "
                 "with the same data directory.  Waiting 5 seconds to see "
                 "if it goes away.");
#ifndef WIN32
        sleep(5);
#else
        Sleep(5000);
#endif
        r = try_locking(options, 0);
        if (r<0) {
          log_err(LD_GENERAL, "No, it's still there.  Exiting.");
          exit(0);
        }
        return r;
      }
      return -1;
    }
    lockfile = lf;
    return 0;
  }
}

Here's the Tor metrics page link for my relay: https://metrics.torproject.org/rs.html#details/B169906909476519CF94B87812C231516FBA2D95
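
In case it’s relevant, here is a small Python sketch of how I could double-check for a second tor process (my own script, not part of Tor; it assumes a Linux /proc filesystem like on the Ubuntu instance above):

import os

# Crude check: list every process whose executable name contains "tor",
# to see whether a second Tor daemon is holding the DataDirectory lock.
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/cmdline", "rb") as f:
            argv = f.read().split(b"\0")
    except OSError:
        continue  # the process exited while we were scanning
    if argv and argv[0] and b"tor" in os.path.basename(argv[0]):
        print(pid, b" ".join(a for a in argv if a).decode(errors="replace"))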

What's going on?


How to kill suspended processes in Windows 10

I’ve found tons of arma3.exe processes in Task Manager. They all have the status “Suspended”. When I try to kill them, I get an “Access Denied” message.
I’ve also tried to kill them from cmd with taskkill using /f /t, but still “Access Denied”. After that I tried psexec with -s to gain SYSTEM privileges, but still nothing.
I’m on Windows 10. Do you have any idea how to kill these processes? And sorry for my English.


Why does WordPress on 3 load balanced EC2 servers run out of memory/fpm child processes and crash?

The problem:

30-40 minutes after pointing the DNS from our old server towards our new server, all available memory gets used up and our (3) load balanced EC2 instances crash.

To make matters worse, it doesn’t appear that Elastic Beanstalk is terminating instances which have crashed. I think that is because we can only select a single auto scaling trigger and memory usage is not one of the available triggers.

According to Chartbeat, our website seems to get 200-400 concurrent users (the Google Analytics Real-Time report shows 60-80 users).

I should also point out that I have “solved” the issue by installing Varnish on the EC2 instances. With Varnish in place, the servers do not crash and the frontend is very fast. However, I wanted to know whether this is normal behavior for 200 users on 3 load-balanced servers. I’m worried that there is something very wrong, or at least something that could be tweaked.

Spec overview:

On AWS we are using

  • 3 to 6 load balanced and autoscaled t2.large EC2 servers (2 vCPUs and 8gigs memory)
  • managed by Elastic Beanstalk (for the Github integration)
  • a classic load balancer
  • Cloudflare is used for the DNS and SSL termination.
  • Apache 2.4
  • PHP 5.6
  • PHP FPM
  • 64bit Amazon Linux/2.7.1 AMI

The configs for PHP and FPM are below:

  • PHP info
  • PHP and FPM settings

What I have found

I am switching the DNS around 10pm EST when traffic is low (200 users according to Chartbeat and 60 according to GA) to test things and gather info.

After about 30-40 minutes, all memory gets used up. Unfortunately I wasn’t monitoring closely enough to notice whether the memory use steadily increased or if it just spiked. You can see from the image that latency times exploded as well.

[screenshot of the memory and latency monitoring graphs]

At this point I checked the logs and saw that the server reached its max_children setting:

[19-Sep-2018 22:50:40] NOTICE: fpm is running, pid 6842
[19-Sep-2018 22:50:40] NOTICE: ready to handle connections
[19-Sep-2018 23:03:21] NOTICE: Reloading in progress ...
[19-Sep-2018 23:03:21] NOTICE: reloading: execvp("php-fpm-5.6", {"php-fpm-5.6"})
[19-Sep-2018 23:03:21] NOTICE: using inherited socket fd=9, "/var/run/php-fpm/php5-fpm.sock"
[19-Sep-2018 23:03:21] NOTICE: using inherited socket fd=9, "/var/run/php-fpm/php5-fpm.sock"
[19-Sep-2018 23:03:21] NOTICE: fpm is running, pid 8293
[19-Sep-2018 23:03:21] NOTICE: ready to handle connections
[19-Sep-2018 23:33:01] WARNING: [pool www] server reached max_children setting (200), consider raising it

I should probably increase max_children back up to 420 from 200. But I guess I didn’t realize what max_children does (it handles each request, right? And each page view could trigger requests for multiple images, CSS files, a PHP file, JS calls, etc.?).
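
If my math is right, that alone could explain the memory exhaustion: assuming each PHP-FPM worker for a WordPress site uses somewhere around 30-50 MB (I haven’t actually measured this), 200 workers at ~40 MB is already about 8 GB, i.e. the entire memory of a t2.large, before Apache or the OS get a share.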

But I was hoping that 3 EC2 servers would be able to handle this load, especially considering that the current, older infrastructure (Rackspace) is basically 2 servers: 1 Varnish cache and 1 server that serves the frontend of the site. Neither of those servers seems beefier than the new AWS servers; they only have 4 GB of memory. The PHP-FPM configs are also much lower on that server:

pm = dynamic
pm.max_children      = 20
pm.start_servers     = 8
pm.min_spare_servers = 5
pm.max_spare_servers = 10

And that’s what’s crazy to me. How can the old server (plus Varnish cache) with lower specs and lower FPM settings handle all this traffic, but my 3 to 6 load-balanced EC2 servers can’t?

Next Steps

  1. Maybe EC2 servers just really suck compared to the old Rackspace server and I need to choose larger instances?
  2. The RDS database is a big bottleneck, and until I adjusted its settings it wouldn’t allow more than 40 connections. Maybe I need to use an EC2 server running MySQL? (I have another, separate but related question open about this.)
  3. Memcache or Redis via ElastiCache might help, so long as I can ensure it doesn’t interfere with the admin section.
  4. OPcache is enabled by default in PHP 5.6, but is there anything else I need to do to use it?
  5. Add memory monitoring and additional autoscaling triggers to Elastic Beanstalk.


Group processes in the taskbar, is this possible?

I work as a support engineer, and I regularly work on several tasks at once, each of which involves:

  • One or several Windbg sessions
  • One or several Visual Studio sessions (application or dumps)

In my taskbar, all Windbg sessions are grouped together and all Visual Studio sessions are grouped together, and the titles are not always very clear (like “(Debugging)” or “Source not found (Debugging)” for Visual Studio, or “Dump C:” for Windbg).

I would like to create a group of processes, give it a name, and put all the processes relevant to that group in it. For example:
Say I’m working on two memory dumps for ticket DCP_12345. I’ll need two Windbg sessions (one for each dump) and two Visual Studio sessions (one for a dump and one for the corresponding application), so I create a group, call it DCP_12345, and after starting those processes, I add them to that group. That way my taskbar is not an incomprehensible pile of Windbg sessions, Visual Studio sessions and so on, but consists of groups named after the corresponding ticket numbers, which makes it very easy for me to follow what I’m currently working on.

Is there such a feature in Windows 10, or a plugin or program to achieve this?
Thanks in advance.


Is it possible to restrict clipboard access to specific processes?

I’m not sure if somebody has implemented this before, but I think it would be helpful if we could make certain clipboard contents available only to specific processes when they are not meant to be public.

Is this technique already available on some platforms, is it ongoing work, or is it deemed unnecessary in favor of other IPC techniques (and if the latter, which ones)?


Python – run two consecutive and dependent background processes using Queue

I want to spawn two consecutive background processes, triggered by user inputs, like so:

if request.method == 'POST':
    if 'one' in request.form:
        cache_one() # background process #1

if request.method == 'POST':
    if 'two' in request.form:
        cache_two() # background process #2

What is the best way of doing this using Queues in this fashion:

q = queues.Queue()
q.put(cache_one())
q.put(cache_two())

in such a way that process #2 starts only after process #1 is finished?
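
For illustration, this is roughly the direction I have in mind (a minimal sketch, assuming cache_one and cache_two are plain functions defined elsewhere): a single worker thread drains the queue, so the jobs run in the background one at a time, in submission order.

import queue
import threading

jobs = queue.Queue()

def worker():
    while True:
        func = jobs.get()   # blocks until a job has been queued
        try:
            func()          # run cache_one / cache_two
        finally:
            jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# In the request handler: enqueue the callables themselves, don't call them.
# jobs.put(cache_one)
# jobs.put(cache_two)   # starts only after cache_one has finished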


How to limit the available bandwidth for specific processes on Windows 8.1 Pro

So, I wish to limit the upload speed of specific applications that lack a built-in control for it, and at the same time I’m trying to find a solution that’s as organic and long-lasting as possible.

With a bit of searching I found quite a list of open-source bandwidth-control software, all of it for Linux, but I’m having trouble finding anything other than free-trial and paywalled applications for Windows, which I do not wish to bother with.

I also stumbled upon some information suggesting it is possible with group policies, but not much detail on how to achieve it.
The computer runs Windows 8.1 Pro.
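
Digging a bit further, it sounds like the group-policy route boils down to a per-application QoS policy, which I believe can also be created from PowerShell on Windows 8.1 (untested sketch; the policy name, executable and rate are placeholders):

# Limit outbound traffic generated by example.exe to roughly 1 Mbit/s (run as administrator).
New-NetQosPolicy -Name "LimitExampleUpload" -AppPathNameMatchCondition "example.exe" -ThrottleRateActionBitsPerSecond 1000000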


tl;dr: I am trying to limit the maximum upload rate for specific applications by using open-source software, group policies or other means, as long as it isn’t commercial/demo/free-trial software or similar.


Bash, display processes in a specific folder

I need to display the processes that are running from a specific folder.
For example, there are folders “TEST” and “RUN”: 3 SQL files are running from TEST, and 2 from RUN. When I use the command ps xa, I see all the processes started from TEST and RUN together. What I want is to see only the processes started from the TEST folder, so only 3. Are there any commands or solutions to do this? Thanks in advance!
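
For what it’s worth, the closest I have come up with myself is a sketch like the one below. It assumes that “running from TEST” means the process’s current working directory is the TEST folder (the path is just an example, and you may need root to inspect other users’ processes):

# List processes whose current working directory is the TEST folder.
for pid in /proc/[0-9]*; do
    if [ "$(readlink "$pid/cwd" 2>/dev/null)" = "/path/to/TEST" ]; then
        ps -o pid,args --no-headers -p "${pid#/proc/}"
    fi
done
# Alternatively, if the .sql file path appears in the command line:
# ps axo pid,args | grep '[/]path/to/TEST/'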


WordPress on 3 load-balanced EC2 servers runs out of memory/FPM child processes and crashes

I recently migrated a WordPress website I inherited over to AWS from Rackspace and am experiencing large performance decreases. I’m pretty new to the whole DevOps thing so I’m not sure where to start looking.

The problem:

30-40 minutes after pointing the DNS from our old server towards our new server, all available memory gets used up and our (3) EC2 instances crash.

To make matters worse, it doesn’t appear that Elastic Beanstalk is terminating instances which have crashed. I think that is because we can only select a single auto scaling trigger and memory usage is not one of the available triggers.

According to Chartbeat, our website seems to get 200-400 concurrent users (the Google Analytics Real-Time report shows 60-80 users).

I think the database (RDS) is also a big bottleneck; the WordPress backend slows to a crawl (15+ seconds to load the “new post” page).

Spec overview:

On AWS we are using 3 to 6 load-balanced/autoscaled t2.large EC2 servers (2 vCPUs and 8 GB memory), managed by Elastic Beanstalk (for the GitHub integration) and a classic load balancer. Cloudflare is used for the DNS and SSL termination.

We are using Apache 2.4 with PHP 5.6, and PHP-FPM. The configs for FPM are below:

/etc/php-fpm-5.6.d/www.conf

pm = dynamic
pm.max_children = 430
pm.start_servers = 129
pm.min_spare_servers = 86
pm.max_spare_servers = 172
pm.max_requests = 500
pm.process_idle_timeout = 10s
ping.path = /ping
pm.status_path = /status

Those numbers are automatically populated by this script. I have since adjusted those settings to something smaller:

pm = ondemand
pm.max_children = 200
pm.start_servers = 10
pm.min_spare_servers = 10
pm.max_spare_servers = 20
pm.max_requests = 500
pm.process_idle_timeout = 10s
ping.path = /ping
pm.status_path = /status
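
If I understand the usual sizing rule of thumb, max_children should be roughly the RAM available to PHP divided by the average memory per FPM worker. Assuming around 40 MB per worker (unmeasured, so this is only a rough estimate), 8 GB supports on the order of 150-200 workers at the very most, so the auto-generated value of 430 looks far too high for a t2.large, and even 200 leaves little headroom.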

/etc/httpd/conf.d/mpm.conf


StartServers 3
ServerLimit 16
MaxRequestWorkers 128
MinSpareThreads 75
MaxSpareThreads 150
ThreadLimit 32
ThreadsPerChild 32
MaxConnectionsPerChild 0

What I have found

I am switching the DNS around 10pm EST when traffic is low (200 users according to Chartbeat and 60 according to GA) to test things and gather info.

After about 30-40 minutes, all memory gets used up. Unfortunately I wasn’t monitoring closely enough to notice whether the memory use steadily increased or if it just spiked. You can see from the image that latency times exploded as well.

[screenshot of the memory and latency monitoring graphs]

At this point I checked the logs and saw that the server reached its max_children setting:

[19-Sep-2018 22:50:40] NOTICE: fpm is running, pid 6842
[19-Sep-2018 22:50:40] NOTICE: ready to handle connections
[19-Sep-2018 23:03:21] NOTICE: Reloading in progress ...
[19-Sep-2018 23:03:21] NOTICE: reloading: execvp("php-fpm-5.6", {"php-fpm-5.6"})
[19-Sep-2018 23:03:21] NOTICE: using inherited socket fd=9, "/var/run/php-fpm/php5-fpm.sock"
[19-Sep-2018 23:03:21] NOTICE: using inherited socket fd=9, "/var/run/php-fpm/php5-fpm.sock"
[19-Sep-2018 23:03:21] NOTICE: fpm is running, pid 8293
[19-Sep-2018 23:03:21] NOTICE: ready to handle connections
[19-Sep-2018 23:33:01] WARNING: [pool www] server reached max_children setting (200), consider raising it

I should probably increase max_children back up to 420 from 200. But I guess I didn’t realize what max_children does (it handles each request, right? And each page view could trigger requests for multiple images, CSS files, a PHP file, JS calls, etc.?).

But I was hoping that 3 EC2 servers would be able to handle this load, especially considering that the current, older infrastructure (Rackspace) is basically 2 servers: 1 Varnish cache and 1 server that serves the frontend of the site. Neither of those servers seems beefier than the new AWS servers; they only have 4 GB of memory. The PHP-FPM configs are also much lower on that server:

pm = dynamic
pm.max_children      = 20
pm.start_servers     = 8
pm.min_spare_servers = 5
pm.max_spare_servers = 10

And that’s what’s crazy to me. How can the old server (plus Varnish cache) with lower specs and lower FPM settings handle all this traffic, but my 3 to 6 load-balanced EC2 servers can’t?

Next Steps

  1. Unlike the old server, I don’t have Varnish running on these new servers, because it would have to be installed on each instance, which makes clearing the cache difficult, right? But maybe I need Varnish.
  2. Related to (1): should I use CloudFront as well as Varnish? We are still using Cloudflare, which provides a CDN service.
  3. Maybe EC2 servers just really suck compared to the old Rackspace server and I need to choose larger instances?
  4. The RDS database is a big bottleneck, and until I adjusted its settings it wouldn’t allow more than 40 connections. Maybe I need to use an EC2 server running MySQL?
  5. Memcache or Redis via ElastiCache might help, so long as I can ensure it doesn’t interfere with the admin section.
  6. OPcache is enabled by default in PHP 5.6, but is there anything else I need to do to use it?
  7. Add memory monitoring and additional autoscaling triggers to Elastic Beanstalk.
  8. Find a service or tool to replicate the 200 concurrent users’ traffic for 40 minutes so I can actually test during the day (a rough sketch follows this list).
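
For point 8, the simplest thing I can think of is plain ApacheBench run from a separate instance (a rough sketch; the URL is a placeholder, and since it only hammers a single page it approximates 200 real users at best):

# Roughly 200 concurrent connections for up to 40 minutes (2400 s); the large -n
# keeps ab from stopping at its default request cap before the time limit is reached.
ab -n 1000000 -c 200 -t 2400 https://www.example.com/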
