Use Azure Active Directory as RADIUS server for VPN gateway?

I’m using Azure Active Directory (Premium, with full MFA). I’ve set up a VPN gateway and would like users to be able to authenticate to it using their Azure AD username and password (instead of certificates).

From everything I read, this should be possible – Azure MFA provides a RADIUS server, and the Azure VPN Gateway can connect to a RADIUS server.

But I can’t figure out how to do it: in the gateway’s P2S configuration, I need to provide an IP address and a secret. Where would I get those for the Azure AD MFA server?
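To frame the question a bit more concretely: as far as I can tell, the gateway side of this is just a pair of RADIUS settings. Below is a rough sketch of what I expect to fill in, using the Azure CLI; the gateway name, resource group, address, and secret are placeholders, and I’m assuming the flag names from what I’ve read, so treat it as a sketch rather than a working command.

# Sketch only - names and values are placeholders
az network vnet-gateway update \
  --name MyVpnGateway \
  --resource-group MyResourceGroup \
  --radius-server 10.1.0.4 \
  --radius-secret "SharedSecretGoesHere"

My question is essentially where those last two values would come from if the RADIUS server is supposed to be Azure AD / Azure MFA.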


Remote Access to Owncloud Server

I’m currently trying to set up my own ownCloud server. I’ve got it fully installed, configured, and accessible from within my local network, but I cannot figure out how to access it from the outside. So far I’ve:

  • Successfully setup port-forwarding on my local router.
    • I’ve done so via ‘single port forwarding’ and ‘port range forwarding’
    • Ports 80, 443, 3306 (Apache-Full and MySQL)
  • Successfully obtained my external IP address.
    • I’ve also tested this magic number from within the network at #insertIPhere/owncloud and it did work.
  • Successfully setup the server using SQLite
  • Successfully setup the server using MySQL
  • Created the following exceptions in my firewall:
    • Allow In Port 80 (Apache Full)
    • Allow In Port 443 (Apache Full)
    • Allow In Port 3306 (MySQL)
  • Tried connecting from several different remote networks, so as to rule out a problem on their end

To access it, I’m using Google Chrome and Mozilla Firefox, trying to reach the server at #insertIPhere/owncloud with the public IP address above.

So what have I missed, and how do I access my server from outside?

Thanks in advance for your help and time, and I apologize in advance for what will probably turn out to be a noobish networking mistake on my part.

I’ve looked at the official documentation, and also at this question here.
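In case it helps, this is roughly how I’ve been testing reachability from an outside machine, plus the trusted_domains check I’ve seen suggested for ownCloud (the IP is a placeholder, and the config path assumes a default install under /var/www/owncloud, which may not match mine):

# From a machine outside my network: does anything answer on 80/443?
curl -I http://<public-ip>/owncloud/
curl -kI https://<public-ip>/owncloud/

# On the server: is the public IP listed as a trusted domain?
grep -A 5 "trusted_domains" /var/www/owncloud/config/config.php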

Many troubles trying to mount QNAP NFS server on Ubuntu mount point

I have a QNAP TS-251 NAS with a pair of 4 GB drives set up as RAID 1, running as a server, mainly for backups. It runs the default operating system, QTS 4.2.0 (2016/01/19), which I think is current; it’s a customized flavor of Linux with a GUI that I use from my Windows desktop. It has IP address 169.254.100.101 on its Ethernet #1 port and 169.254.100.102 on its Ethernet #2 port.

As a client, I have an old Dell (Core 2 Duo T8100) laptop running Ubuntu 14.04 LTS. It’s connected to the QNAP’s #2 Ethernet port, and its IP address is 169.254.100.99.

Note: From here on I’ll put background details in [brackets]; they can probably be skipped, but they answer the “Why would you do this?” questions I anticipate.

There’s also a Windows desktop connected to the QNAP’s #1 Ethernet port, which is involved in this only to run the QNAP’s GUI. I can also use PuTTY from it to get a command line on the QNAP if I need one.

[The QNAP comes with several default shared folders: Download, Multimedia, Public, Recordings, Web, homes.]

I created a shared folder named CrashPlan on the QNAP server, and a mount point on the Ubuntu client named /mnt/QNAP-CrashPlan. I installed NFS client packages on the client with sudo apt-get install portmap nfs-client [and installed autofs with sudo apt-get install autofs in an unsuccessful attempt to diagnose problems].

Following advice in this question, I gave NFS access rights, host/IP/network 169.254.*, permission read/write, and squash option NO_ROOT_SQUASH. The anonymous boxes remained grayed out.

So, in the moment of truth, from the client, I tried sudo mount 169.254.100.102:/CrashPlan /mnt/QNAP-CrashPlan and sudo mount -t nfs 169.254.100.102:/CrashPlan /mnt/QNAP-CrashPlan, and got mount.nfs: Connection timed out in both cases.

In an attempt to diagnose the problem, I tried showmount -e 169.254.100.102, but it replied clnt_create: RPC: Port mapper failure - Unable to receive: errno 111 (Connection refused).
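In case the output helps, these are the only other diagnostics I know to run from the client against the same QNAP address; rpcinfo in particular should show whether the portmapper and NFS services are reachable at all (nmap only if it happens to be installed):

# Is the portmapper answering at all?
rpcinfo -p 169.254.100.102

# Are the usual NFS ports open? (111 = rpcbind, 2049 = nfs)
nmap -p 111,2049 169.254.100.102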

I’ve searched a lot and tried a lot, but I haven’t found any further paths to diagnose the problem. Any ideas?

I will edit in more details as needed. Also, this might deserve a “qnap” tag, but I don’t have permission to create tags.

[XY problem details: The reason I named the shared folder that I created CrashPlan is that I tried to run the QNAP as a CrashPlan server. I wasn’t able to get that to work except by mounting the CrashPlan folder as a Windows drive, using net use to mount it, and running a separate user-mode client instance of CrashPlan on Windows, because Windows doesn’t allow services access to net use drives. Running the CrashPlan server on my old Ubuntu laptop with the QNAP mounted through NFS was described as a configuration that avoided that issue.]

No network interfaces – Ubuntu Server 18.04

So, I had a fresh install of Ubuntu Server 18.04, and everything worked fine until I rebooted it today. Now there are no network interfaces except the loopback (lo). If I do ‘ifconfig -a’ I do see my other network interfaces. I looked around online and there are posts about using Netplan, which is what I had used to give my server a static IP address. The .yaml config is still there and everything.

I can’t run ‘netplan apply’, as it says the command isn’t found. Anything else I find is about older versions of Ubuntu Server, which I did try (editing /etc/network/interfaces, creating another .yaml file).
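For reference, this is the sort of thing I’ve been trying from the console to rule out the obvious (enp0s3 is just an example interface name; mine may differ):

# Is netplan actually installed, and is it just not on my PATH?
dpkg -l | grep -i netplan
ls -l /usr/sbin/netplan
sudo netplan apply

# Bring an interface up by hand as a stopgap
ip link
sudo ip link set enp0s3 up
sudo dhclient enp0s3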

I’m at a loss, since nothing else I search for helps; it all ends up being the same things. Does anyone have any suggestions?

Outlook Cached mode is not updating user’s email changes back to the server

I have a user having problems with Outlook Cached mode. It seems that changes she makes to her inbox are not being made server side.

For example, if an email is deleted or moved to a different folder, then according to the server that email is still in the inbox.

This first came to light when a new PC was provisioned for this employee and they logged into Outlook. The inbox had literally thousands of emails in it, all of which had been handled and cleared away previously.

New messages are arriving without any problems, although occasionally we have to click the “Update This Folder” button within Outlook. This seems to be more of a problem on the new Outlook 2010 install than it was with Outlook 2003 on the old machine.

Other PCs in the office seem to be fine. I’m checking this by looking at the Synchronization tab under the Inbox properties and checking that the “Server folder contains” and “Offline folder contains” figures match.

I’d welcome suggestions of how to correct this.

Lidgren not starting server properly

I have a server set up for my game with Lidgren, and it works well when everything runs on one machine, but if I run the server on another machine, I am unable to connect to it from other machines. I checked my port while the server was running, and it says it’s closed, despite the Lidgren server running. I set the NetPeerConfiguration LocalAddress to my IPv4 address and made sure my router was forwarding the port I run the server on to that IPv4 address, but still no luck. Here is the relevant server code:

// App identifier; connecting clients must use the same string
NetPeerConfiguration config = new NetPeerConfiguration("Platformer");
// Collect this machine's IPv4 addresses and bind the server to the first one
IPAddress[] ipv4Addresses = Array.FindAll(Dns.GetHostEntry(string.Empty).AddressList, a => a.AddressFamily == AddressFamily.InterNetwork);
config.LocalAddress = ipv4Addresses[0];
config.Port = 8888;
// Allow the server to receive connection-approval messages from incoming clients
config.EnableMessageType(NetIncomingMessageType.ConnectionApproval);
NetServer server = new NetServer(config);
server.Start();

The server says it’s being started at the local IP with the right port, and everything is forwarded correctly on the router. What is going wrong here?

How do I apply a manifest from a Puppet Master server to a Puppet Agent node server?

I installed Puppet Agent on a CentOS 7 server and Puppet Master on a different CentOS 7 server. I’m using the free version of Puppet on both. The Puppet Agent server requested a certificate, and I signed it on the Puppet Master server.

There is no software firewall on either server and no firewall between them. I temporarily made SSHD listen on port 8140 and used SSH to verify the port was open, then reverted SSHD to only listen on port 22; port 8140 wasn’t blocked. nslookup on the IP addresses and ping against the domain names show that both servers have correct networking information about each other.

These are new servers; Puppet has never worked on them before. I created a simple manifest and applied it to the Puppet Master server locally, where it worked without errors. I then tried to apply it to the Puppet Agent server, and it didn’t work.

From the Puppet Master server, I ran this:

puppet agent neat.pp --server hostNameOfPuppetAgentServer --verbose

There were no errors on the Puppet Master server after I pressed Enter; there was no output at all. This produced no logs on the Puppet Agent server, and there was no evidence it worked there. I checked, and the changes were not applied.

On the Puppet Master server I ran this:

puppet agent --server hostNameOfPuppetAgentServer --test

The results included “Connection refused - connect(2)”. What is wrong? I expect the manifest to be applied on the Puppet Agent node.
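For what it’s worth, this is how I’ve been checking whether anything is actually listening on 8140; the “Connection refused” makes me think the TCP connection itself is being rejected (the hostname below is the same one I passed to --server):

# On the host passed to --server: is anything listening on 8140?
sudo ss -tlnp | grep 8140

# From the other host: does a connection to 8140 open at all?
curl -kv https://hostNameOfPuppetAgentServer:8140/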

NFS server seems to kick diskless nodes off

I’m currently helping setting up a lab that will use diskless nodes for some MPI and CUDA computing.

The distribution of choice is CentOS 7.
To set up the diskless nodes I’ve followed the guide here.

I was able to boot a diskless node successfully and even run some MPI test programs, so everything works fine in terms of connectivity, firewalls, NFS exports, etc.

The problem is that roughly 12 hours after the diskless node has booted, the main server (which acts as the DHCP, TFTP, and NFS server) seems to kick the diskless node off the NFS service, which results in the kernel: nfs: server not responding, still trying message appearing on the client. At that point I also stop getting ping replies from the diskless client. Since the client has its root fs mounted over NFS, I guess this leaves the client in a “corrupted” state, only allowing me to reboot with Ctrl+Alt+Del or the machine’s reset switch. No matter how much time passes, the client won’t reconnect.
Inspecting /var/log/messages on the main server, I found this (in my opinion) interesting line:
Oct 8 23:30:50 myhostname kernel: NFSD: purging unused client (clientid e87d62f6).

Here is a bigger part of the log:
Oct 8 23:30:17 myhostname kernel: nfsv4 compound op ffff885c713d4080 opcnt 4 #3: 3: status 0
Oct 8 23:30:17 myhostname kernel: nfsv4 compound op #4/4: 9 (OP_GETATTR)
Oct 8 23:30:17 myhostname kernel: nfsd: fh_verify(36: 01070001 00260308 00000000 996a1153 334c49c8 b8768c81)
Oct 8 23:30:17 myhostname kernel: nfsv4 compound op ffff885c713d4080 opcnt 4 #4: 9: status 0
Oct 8 23:30:17 myhostname kernel: nfsv4 compound returned 0
Oct 8 23:30:17 myhostname kernel: --> nfsd4_store_cache_entry slot ffff885c72a66000
Oct 8 23:30:17 myhostname kernel: renewing client (clientid 5bbb153f/e87d62f7)
Oct 8 23:30:50 myhostname kernel: NFSD: laundromat service - starting
Oct 8 23:30:50 myhostname kernel: NFSD: purging unused client (clientid e87d62f6)
Oct 8 23:30:50 myhostname kernel: nfsd4_umh_cltrack_upcall: cmd: remove
Oct 8 23:30:50 myhostname kernel: nfsd4_umh_cltrack_upcall: arg: 4c696e7578204e465376342e31206e6f64653033
Oct 8 23:30:50 myhostname kernel: nfsd4_umh_cltrack_upcall: env0: (null)
Oct 8 23:30:50 myhostname kernel: nfsd4_umh_cltrack_upcall: env1: (null)
Oct 8 23:30:50 myhostname kernel: nfsd4_umh_cltrack_upcall: /sbin/nfsdcltrack return value: 0
Oct 8 23:30:50 myhostname kernel: NFSD: laundromat_main - sleeping for 57 seconds
Oct 8 23:31:48 myhostname kernel: NFSD: laundromat service - starting
Oct 8 23:31:48 myhostname kernel: NFSD: purging unused client (clientid e87d62f7)
Oct 8 23:31:48 myhostname kernel: nfsd4_umh_cltrack_upcall: cmd: remove
Oct 8 23:31:48 myhostname kernel: nfsd4_umh_cltrack_upcall: arg: 4c696e7578204e465376342e31206e76696469613031
Oct 8 23:31:48 myhostname kernel: nfsd4_umh_cltrack_upcall: env0: (null)
Oct 8 23:31:48 myhostname kernel: nfsd4_umh_cltrack_upcall: env1: (null)
Oct 8 23:31:48 myhostname kernel: nfsd4_umh_cltrack_upcall: /sbin/nfsdcltrack return value: 0
Oct 8 23:31:48 myhostname kernel: NFSD: laundromat_main - sleeping for 90 seconds
Oct 8 23:33:18 myhostname kernel: NFSD: laundromat service - starting
Oct 8 23:33:18 myhostname kernel: NFSD: laundromat_main - sleeping for 90 seconds
Oct 8 23:34:48 myhostname kernel: NFSD: laundromat service - starting
Oct 8 23:34:48 myhostname kernel: NFSD: laundromat_main - sleeping for 90 seconds

Afterwards it just continues looping the laundromat service starting/sleeping message forever.

nfsstat does not reveal anything weird on the server, like badcalls etc.
I’ve also tried forcing NFSv3; I get the same problem, but the purging unused client and laundromat messages no longer appear in the logs (I’m guessing they were added in v4?).
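For completeness, this is what I’ve been watching on the server when a node drops (assuming NFSv4 over TCP on port 2049, and assuming the nfsd filesystem is mounted at /proc/fs/nfsd as it is on my CentOS 7 install):

# Does the server still see a TCP connection from the diskless node?
ss -tn | grep :2049

# Current NFSv4 lease time; the laundromat purges clients whose lease has expired
cat /proc/fs/nfsd/nfsv4leasetime

# Server-side RPC statistics, in case badcalls start showing up
nfsstat -s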

Now for some details on the connectivity. The main server has two network interfaces: one Realtek (which works out of the box with the kernel drivers) and one NVIDIA nForce, which needs kmod-forcedeth from ELRepo. All server services are on the NVIDIA nForce card. The diskless node and the server are connected via a gigabit switch (can’t remember the brand name/model, sorry).

Isolate DNS server process to run in a certain directory

I’m using CentOS on a virtual machine (only the terminal of it, to be more specific). I’m trying to do a task, the one from the description.

I was able to start the DNS server using the dnsmasq package that was installed. My question is: how can I isolate the process so it runs in a certain directory (not root, for example)?
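To make the question more concrete, what I had in mind was something along the lines of the sketch below, using a plain chroot (the /srv/dns-jail path is just an example, and I understand the jail would need to contain the dnsmasq binary, its libraries, and its config); I’m asking whether this is the right approach or whether there is a cleaner way:

# Example only: build a minimal directory tree and start dnsmasq inside it
sudo mkdir -p /srv/dns-jail/etc /srv/dns-jail/var/run
sudo chroot /srv/dns-jail /usr/sbin/dnsmasq --no-daemon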