Tuesday, June 30, 2015
RHEVM integration with IPA (IdM) - Troubleshooting
I had to lower the security in /etc/dirsrv/slapd-INFRA-ALPHACLOUD-AE/dse.ldif by changing the minssf (nsslapd-minssf) value to 0.
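For reference, a minimal sketch of that change, assuming the standard 389-DS attribute name (nsslapd-minssf) under the cn=config entry, done with the instance stopped:
service dirsrv stop
vi /etc/dirsrv/slapd-INFRA-ALPHACLOUD-AE/dse.ldif    # under "dn: cn=config", set: nsslapd-minssf: 0
service dirsrv start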
Thursday, June 25, 2015
All VMs are showing "?" instead of power-up (green) or down (red) status
For any kind of inconsistency on any of the hosts, use the following steps:
1) Migrate all the VMs off the host and make sure the migration completed.
2) Cross-check the VMs on this host:
# vdsClient -s host02 list table
7dd2737c-df9b-406d-ab53-00788db5005e 29389 de01adaup009 Up 10.205.0.14
774c0144-dc94-47cc-838c-3b0a1607e491 29193 de01adaup012 Up 10.199.107.62
97c26b75-8291-4564-818f-0cb37b98605e 30798 de01addxp014 Up 192.168.5.151
96442b3b-e509-4ba6-aae2-c05d16fcd3d7 27957 cfme02 Up 10.199.102.36
caff5937-d4f6-4fef-a6db-6ead5d7cb884 32197 de01addxshruti Up
a64626c2-78d6-4054-a445-3d93f4867a0f 32529 de01addxt004 Up 10.199.102.41
61994678-bf13-461a-aef3-5b868ac8fc63 14220 Nagios Up
202aafc8-5bb4-4b2d-b7d8-d4736c46aaef 32335 de01addxp021 Up 10.199.102.28
6b9c8ef9-3f0c-4bbd-a42a-074ce7db4da4 32779 de01addxt002 Up 10.199.102.102
We can try command-line migration (.NA.):
vdsClient -s 0 migrate vmId=61994678-bf13-461a-aef3-5b868ac8fc63 method=online src=localhost dst=host03
This is the process to avoid any shutdown of VMs:
1) Disable power management for this host. Do this from the web UI by opening the Edit menu for the host, opening the Power Management tab, unchecking "Enable Power Management", and then clicking OK.
2) On the host, kill the two libvirt processes.
ps -ef | grep libvirt | grep listen
3) Start libvirtd again. Note that this requires initctl instead of service on RHEL 6:
# initctl start libvirtd
4) Wait about 60 seconds, then restart vdsmd and try the migration again.
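Put together, a minimal recovery sequence on a RHEL 6 host could look like this (assuming the libvirtd processes show a listen flag on their command line, as in the ps filter above):
pkill -f 'libvirtd.*listen'   # step 2: kill the listening libvirt processes
initctl start libvirtd        # step 3: upstart job on RHEL 6
sleep 60                      # step 4: give libvirtd a minute to settle
service vdsmd restart         # then refresh vdsmd so the engine re-reads VM status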
Tuesday, June 23, 2015
VM migration failed!
Have a look at vdsm.log on both hosts (source as well as destination).
Use the view command to open the log and search for libvirtError near the end of the file.
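Assuming the default vdsm log location, something like this pulls out the most recent libvirt errors on each host:
grep -n libvirtError /var/log/vdsm/vdsm.log | tail -5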
Monday, June 15, 2015
Enable Memory Page Sharing (KSM)
By default, KSM is managed for you on a RHEV host (by ksmtuned/MoM), so you normally don't need to worry about it. Once memory usage crosses roughly 80%, KSM is turned on and the magic begins; KSM can cut memory utilization in half or more.
Enable it explicitly if in doubt:
---
vdsClient -s 0 setMOMPolicyParameters ksmEnabled=True
Check if the KSM daemons are operational:
-----
for i in ksm ksmtuned; do service $i status ; chkconfig $i --list ; done
Edit the file below to enable debug info (uncomment the debugging and log file lines):
---
vi /etc/ksmtuned.conf
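On a stock RHEL 6 ksmtuned.conf the relevant lines, once uncommented, look like this (verify against your own copy):
LOGFILE=/var/log/ksmtuned
DEBUG=1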
Restart the daemons:
---
for i in ksm ksmtuned; do service $i restart ; done
Check log:
---
tail -f /var/log/ksmtuned
and
cat /sys/kernel/mm/ksm/run
1 => KSM active
0 => KSM inactive
----------
KSM starts only when the threshold free memory limit is reached, i.e. memory usage goes above 80%, and it is stopped automatically while usage stays under the threshold. So you may not always see KSM as enabled, and the same goes for the 'Memory Page Sharing' status.
Once there is more load, i.e. more VMs running on the host, vdsm will enable KSM as required, and after that the 'Memory Page Sharing' status will show as enabled in the RHEVM GUI as well.
MoM isn't a separate service but a vdsm thread, so restarting vdsm also restarts MoM. To decide whether to enable or disable KSM, MoM consults the policy file /etc/vdsm/mom.d/03-ksm.policy. Its notation isn't trivial, but in short the current version tells MoM to enable KSM when free memory drops below 20% of total memory.
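As a rough illustration of that threshold (this is not part of MoM, just a quick check of where the host currently sits relative to 20% free):
awk '/^MemTotal/ {t=$2} /^MemFree/ {f=$2} END {printf "free memory: %.1f%% of total\n", f/t*100}' /proc/meminfo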
For more details, please refer to the kbase articles below.
>> Refer :
Why is KSM not working on my RHEL Virtualization Host?
>> Refer :
In RHEV 3.3, ksmtuned service is stopped. So, how is KSM being controlled?
vNUMA in RHEVM
The VM has to be down to complete this process. Edit/update the VM with the following:
1) Set "Run on specific host" (a host with at least two NUMA nodes; see the check after this list)
2) Set Do not migrate VM
3) Choose, for example, "Interleave" mode and a NUMA node count equal to some value
(if you choose "Preferred" mode, you need to set this value to 1):
Example:
NUMA Node Count = 2
Tune Mode = Interleave
4) Open 'Numa Pinning' menu
5) Drag a virtual NUMA node (vNUMA) from the right column onto one of the NUMA nodes on the left, and click OK.
6) Now you should have NUMA pinning in place.
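To confirm the host chosen in step 1 really exposes at least two NUMA nodes, run something like this on the hypervisor (assuming numactl is installed):
numactl --hardware
lscpu | grep -i numa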
I hope it will help.
Sunday, June 7, 2015
Solution 3 : Power Management test Failed \ Hypervisors becoming non-operational or in Error state
You can use the following script to refresh the vdsmd daemon, which might stop behaving properly after 2-4 weeks.
------------------------
[root@RHEVHOST1 ~]# cat /test/bin/vdsmd_refresh.sh
service vdsmd restart > /var/log/vdsm_cron.log1 2>&1
sleep 10
service vdsmd restart > /var/log/vdsm_cron.log2 2>&1
Normally this doesn't reboot any VM, but it might pause VMs for a second. A self-defense mechanism inside a VM can sometimes cause it to reboot itself, so you may want to migrate such VMs before running this.
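The log file names suggest the script is meant to run from cron; a crontab entry along these lines would do it (the weekly schedule below is only an assumption, adjust to taste):
0 3 * * 0 /test/bin/vdsmd_refresh.sh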
Solution 1 : Power Management test Failed \ Hypervisors becoming non-operational or in Error state
Edit the host (under the RHEV Manager console) and set the following power management options:
-------
Enable Power Management
Paste this into the Options field >> lanplus power_wait=4
Then Test and Save
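To sanity-check the BMC from a shell before saving, ipmitool can exercise the same lanplus interface; the address and credentials below are placeholders:
ipmitool -I lanplus -H <bmc-ip> -U <user> -P '<password>' chassis power status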
Solution 2 : Power Management test Failed \ Hypervisors becoming non-operational or in Error state
The following is the method to change the host heartbeat check (the default is 10) in the RHEV Manager config.
--------------------------------------------
Check if vdsHeartbeatInSeconds.type has been specified already:
cat /etc/ovirt-engine/engine-config/engine-config.properties |grep vdsHeartbeatInSeconds.type
Add it if it doesn't exist (take a backup of engine-config.properties first):
echo vdsHeartbeatInSeconds.type=Integer >> /etc/ovirt-engine/engine-config/engine-config.properties
If all looks good, run the commands below.
-------
engine-config -s vdsHeartbeatInSeconds=20
engine-config -l
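The change only takes effect after the engine service is restarted, and the value can be read back to confirm:
service ovirt-engine restart
engine-config -g vdsHeartbeatInSeconds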